Today we’re going to create memory! Using the basic logic gates we discussed in episode 3, we can build a circuit that stores a single bit of information, and then through some clever scaling (and of course many new levels of abstraction) we’ll show you how we can construct the modern random-access memory, or RAM, found in our computers today.
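To make the idea concrete, here is a minimal software sketch of a one-bit latch built from the basic gates described above. Python's boolean operators stand in for AND, OR, and NOT gates; a real circuit updates continuously, whereas here we step the feedback loop by hand, and the function name is our own.

```python
def and_or_latch(set_wire, reset_wire, output):
    """One feedback step of an AND-OR latch:
    OR the set wire with the current output, then AND with NOT reset."""
    return (set_wire or output) and (not reset_wire)

out = False
out = and_or_latch(True, False, out)   # pulse SET: latch now stores 1
out = and_or_latch(False, False, out)  # inputs released: it still remembers 1
out = and_or_latch(False, True, out)   # pulse RESET: latch stores 0
out = and_or_latch(False, False, out)  # inputs released: it still remembers 0
```

The key property is the feedback: the output is wired back into the input, so the circuit "remembers" its last state even after the set and reset signals go away. Stack enough of these latches side by side, add addressing logic, and you have RAM.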
From polygon count and meshes, to lighting and texturing, there are a lot of considerations in building the 3D objects we see in our movies and video games, but displaying these 3D objects on a 2D surface adds a number of additional challenges. So we’ll talk about some of the reasons you see occasional glitches in your video games, as well as the reason a dedicated graphics processing unit, or GPU, was needed to meet the increasing demand for more and more complex graphics.
We begin our discussion of computer graphics. We ended the last episode (Keyboards and Command Line Interfaces: Crash Course Computer Science #22) with the proliferation of command line (or text) interfaces, which sometimes used screens, but more often printed output onto paper via electronic typewriters or teletypes. By the early 1960s, however, a number of technologies were introduced to make screens much more useful, from cathode ray tubes and graphics cards to ASCII art and light pens. This era would mark a turning point in computing - computers were no longer just number-crunching machines, but potential assistants interactively augmenting human tasks. This was the dawn of graphical user interfaces, which we’ll cover more in a few episodes.
We’re going to talk about how computers see. We’ve long known that our digital cameras and smartphones can take incredibly detailed images, but taking pictures is not quite the same as seeing. For the past half-century, computer scientists have been working to help our computing devices understand the imagery they capture, leading to advancements everywhere, from tracking hands and whole bodies to the biometrics that unlock our phones.
You will be familiar with computer graphics from games, films, and images, and there is amazing software available to create images, but how does the software work? The role of a computer scientist is not just to use graphics systems, but to create them, and especially to invent new techniques.
The entertainment industry is always trying to develop new graphics software so that they can push the boundaries and create new experiences. We've seen this in the evolution of animated films, from simple 2D films to realistic computer-generated movies with detailed 3D images. The names of dozens of computer scientists now regularly appear in the credits for films that use CGI or animation, and some have even won Oscars for their innovative software!
Movie and gaming companies can't always use existing software to make the next great thing – they need computer scientists to come up with better graphics techniques to make something that's never been seen before. The creative possibilities are endless!
Computer graphics are used in a wide variety of situations: games and animated movies are common examples, but graphics techniques are also used to visualize large amounts of data (such as all cellphone calls being made in one day or friend connections in a social network), to display and animate graphical user interfaces, to create virtual reality and augmented reality worlds, and much more.
When computers were first developed, the only way they could interact with the outside world was through the input that people wired or typed into them. Digital devices today often have cameras, microphones, and other sensors through which programs can perceive the world we live in automatically. Processing images from a camera, and looking for interesting information in them, is what we call computer vision.
With increasing computing power, shrinking hardware, and progressively more advanced algorithms, computer vision has a growing range of applications. While it is commonly used in fields like healthcare, security, and manufacturing, we are finding more and more uses for it in our everyday lives, too.
Computers are machines that do stuff with information. They let you view, listen, create, and edit information in documents, images, videos, sound, spreadsheets, and databases. They let you play games in simulated worlds that don’t really exist except as information inside the computer’s memory and displayed on the screen. They let you compute and calculate with numerical information; they let you send and receive information over networks. Fundamental to all of this is that the computer has to represent that information in some way inside the computer’s memory, as well as storing it on disk or sending it over a network.
To make computers easier to build and keep them reliable, everything is represented using just two values. You may have seen these two values represented as 0 and 1, but on a computer, they are represented by anything that can be in two states. For example, in memory, a low or high voltage is used to store each 0 or 1. On a magnetic disk, it's stored with magnetism (whether a tiny spot on the disk is magnetized north or south).
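To see how different kinds of information end up as the same two-state values, here is a small illustration: the number 42 and the letter "A" both become patterns of 0s and 1s. (The 8-bit width and the ASCII code for "A" are standard, but the snippet itself is just an illustration, not part of the original text.)

```python
# The same two-state values represent both numbers and text.
n = 42
print(format(n, '08b'))        # 00101010  (the number 42 in 8 bits)

c = 'A'
print(format(ord(c), '08b'))   # 01000001  (ASCII code 65 in 8 bits)
```

Whether those patterns are held as high and low voltages in memory or as north and south magnetization on a disk, the 0s and 1s are the same.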
This chapter will examine how data is stored on computers, be it text, images, colors, etc.
Data compression reduces the amount of space needed to store files. If you can halve the size of a file, you can store twice as many files for the same cost, or you can download the files twice as fast (and at half the cost if you're paying for the download). Even though disks are getting bigger and high bandwidth is becoming common, it's nice to get even more value by working with smaller, compressed files. For large data warehouses, like those kept by Google and Facebook, halving the amount of space taken can represent a massive reduction in the space and computing required, and consequently big savings in power consumption and cooling, and a huge reduction in the impact on the environment.
Common forms of compression that are currently in use include JPEG (used for photos), MP3 (used for audio), MPEG (used for videos including DVDs), and ZIP (for many kinds of data). For example, the JPEG method reduces photos to a tenth or smaller of their original size, which means that a camera can store 10 times as many photos, and images on the web can be downloaded 10 times faster.
So what's the catch? Well, there can be an issue with the quality of the data – for example, a highly compressed JPEG image doesn't look as sharp as an image that hasn't been compressed. Also, it takes processing time to compress and decompress the data. In most cases, the tradeoff is worth it, but not always.
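A rough back-of-envelope calculation shows what the "tenth of the original size" figure mentioned above means in practice. The 12-megapixel camera and 3 bytes per pixel here are illustrative assumptions, not figures from the text.

```python
# Back-of-envelope numbers for a roughly 10:1 JPEG saving.
pixels = 12_000_000          # a 12-megapixel photo (assumption)
raw_bytes = pixels * 3       # 3 bytes (red, green, blue) per pixel (assumption)
raw_mb = raw_bytes / 1_000_000
jpeg_mb = raw_mb / 10        # JPEG at roughly a tenth of the original size

print(raw_mb)    # 36.0 MB uncompressed
print(jpeg_mb)   # 3.6 MB as a JPEG
```

At that ratio, a memory card that holds a few hundred uncompressed photos holds a few thousand JPEGs, which is exactly the tradeoff the camera is making for you.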
In this lesson, students are introduced to the standard units for measuring the sizes of digital files, from a single byte, all the way up to terabytes and beyond. Students begin the lesson by comparing the size of a plain text file containing “hello” to a Word document with the same contents. Students are introduced to the units kilobyte, megabyte, gigabyte, and terabyte, and research the sizes of files they make use of every day, using the appropriate terminology. This lesson foreshadows an investigation of compression as a means for combatting the rapid growth of digital data.
This lesson has two simple purposes:
The 8-bit byte has become the de facto fundamental unit with which we measure the “size” of data on computers, and in fact, today most computers only let you save data as combinations of whole bytes; even if you only want to store 1 bit of information, you have to use a whole byte to do it. And many computer systems will require you to store even more than that. Data sent over the Internet is also typically structured in whole bytes, with fields located at byte offsets.
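A small sketch makes the "whole bytes only" point concrete: a single true/false flag still occupies a full byte when stored, and message fields sit at byte offsets. The 2-byte and 4-byte field sizes below are an illustrative layout, not a real protocol.

```python
import struct

# Even one bit of information occupies a whole 8-bit byte when stored.
flag = True
stored = bytes([1 if flag else 0])
print(len(stored))   # 1 full byte for 1 bit of information

# Messages are typically laid out at byte offsets: here, a hypothetical
# header with a 2-byte version field followed by a 4-byte length field.
header = struct.pack('>HI', 1, 512)
print(len(header))   # 6 bytes: version at offset 0, length at offset 2
```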
Paralleling the explosion of computing power and speed, the sheer size of the digital data now created and consumed every day is staggering. Units of measure (terabytes) that previously seemed unfathomably large are now making their way into personal computing. This rapid growth of digital data presents many new opportunities and also poses new challenges to engineers and programmers. The implications of so-called Big Data will not be investigated until later in the course, but it's good and interesting to be thinking about the size of things now.
Students will be able to:
- use appropriate terminology when describing the size of digital files.
- identify and compare the size of familiar digital media.
- solve small word problems that require reasoning about file sizes.
In this lesson, students will begin to explore the way digital images are encoded in binary.
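A tiny example shows the idea the lesson explores: a black-and-white image can be encoded as bits, with 1 for a black pixel and 0 for white. The 5x5 "X" bitmap below is our own illustration, not material from the lesson.

```python
# A 5x5 black-and-white image encoded in binary: 1 = black pixel, 0 = white.
image = [
    "10001",
    "01010",
    "00100",
    "01010",
    "10001",
]

# Render the bits so the encoded picture (an 'X') becomes visible.
for row in image:
    print(''.join('#' if bit == '1' else '.' for bit in row))
```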
Students learn the difference between lossy and lossless compression by experimenting with a simple lossy compression widget for compressing text. Students then research three real-world compressed file formats to fill in a research guide. Throughout the process, they review the skills and strategies used to research computer science topics online, in particular, to cope with situations when they don't have the background to fully understand everything they're reading (a common situation even for experienced CS students).
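To illustrate the lossy/lossless distinction the students explore, here is a toy contrast in code. Neither scheme is the actual widget from the lesson: the vowel-dropping "compressor" is lossy (the original text cannot be recovered), while run-length encoding is lossless (fully reversible).

```python
def lossy_compress(text):
    """Drop vowels: shorter, but the original can't be recovered exactly."""
    return ''.join(ch for ch in text if ch.lower() not in 'aeiou')

def rle_compress(text):
    """Run-length encode: replace each run of a character with count+char."""
    out, i = [], 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        out.append(f"{j - i}{text[i]}")
        i = j
    return ''.join(out)

def rle_decompress(code):
    """Reverse run-length encoding: expand each count+char pair."""
    out, i = [], 0
    while i < len(code):
        j = i
        while code[j].isdigit():
            j += 1
        out.append(code[j] * int(code[i:j]))
        i = j + 1
    return ''.join(out)

print(lossy_compress("compression"))            # cmprssn
print(rle_compress("aaabbc"))                   # 3a2b1c
print(rle_decompress(rle_compress("aaabbc")))   # aaabbc
```

Round-tripping through the lossless scheme returns the exact original; round-tripping through the lossy one cannot, which is the tradeoff the lesson's research guide asks students to weigh for real file formats.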
The first goal of this lesson is straightforward: understand what lossy and lossless compression are and when/why each might be used. Students should see a number of examples of this distinction throughout the lesson and should leave the lesson being able to describe the relative benefits of each.
The second goal of this lesson is to build up students' research skills both for the project they will complete in the next lesson and for the Explore PT at the end of the year. Students will need practice finding reliable sources, reading technical articles, and synthesizing information. The teacher's role in calling out the skills being used, not merely the facts being found, is significant.
Students will be able to:
- explain the difference between lossy and lossless compression.
- explain the relative benefits or drawbacks of different file formats, particularly in terms of how they compress information.
- identify reliable sources of information when doing research.
- explain the difference between open source and licensed software.