Today we’re going to create memory! Using the basic logic gates we discussed in episode 3, we can build a circuit that stores a single bit of information, and then through some clever scaling (and of course many new levels of abstraction) we’ll show you how we can construct the modern random-access memory, or RAM, found in our computers today.
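The key trick behind that single-bit circuit is feedback: a gate's output loops back into its own input, so the circuit "remembers". Here's a minimal sketch of that idea in Python — an AND-OR latch simulated as a function (the names here are illustrative, not from the episode):

```python
# Minimal simulation of an AND-OR latch: one bit of memory built from
# basic logic gates. SET drives the output high; RESET clears it; when
# both inputs are low, the fed-back output holds its previous value.
def and_or_latch(set_bit, reset_bit, current):
    # The output (current) feeds back into the OR gate - that feedback
    # loop is what lets the circuit store a bit.
    return (set_bit or current) and not reset_bit

state = False
state = and_or_latch(True, False, state)   # set the bit -> True
state = and_or_latch(False, False, state)  # inputs released: bit is held
state = and_or_latch(False, True, state)   # reset the bit -> False
```

Scaling this up is then "just" a matter of wiring many latches into a grid with addressing logic in front — the levels of abstraction the episode walks through.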
We begin our discussion of computer graphics. So, we ended the last episode (Keyboards and Command Line Interfaces: Crash Course Computer Science #22) with the proliferation of command line (or text) interfaces, which sometimes used screens but more often printed output onto paper via electronic typewriters or teletypes. By the early 1960s, though, a number of technologies had been introduced that made screens much more useful, from cathode ray tubes and graphics cards to ASCII art and light pens. This era marked a turning point in computing - computers were no longer just number-crunching machines, but potential assistants interactively augmenting human tasks. This was the dawn of graphical user interfaces, which we’ll cover in more depth in a few episodes.
This video discusses some psychological considerations in designing computers: how to make them easier for humans to use, the uncanny valley problem that arises as humanoid robots become more and more humanlike, and strategies for making our devices work better with us by incorporating our emotions and even altering our gaze.
Today, we’re going to take a look at how computers use a stream of 1s and 0s to represent all of our data - from our text messages and photos to music and webpages. We’re going to focus on how these binary values are used to represent numbers and letters and discuss how our need to perform operations on larger and more complex values brought us from our 8-bit video games to beautiful Instagram photos, and from the unreadable garbled text in our emails to a universal language encoding scheme.