These points present very compelling reasons to believe that we may never achieve Strong AI. Perhaps even the most accurate of brain simulations will not yield minds, nor will software programs produce consciousness. It just might not be in the cards for a strict binary processor. There is nothing about processing symbols or computation that generates subjective experience or psychological phenomena like qualitative sensations. Upon hearing this, one might be inclined to ask, "If a computer can't be conscious, then how can a brain?"
It even uses electrical activity to process information, just like a computer. Yet somehow we experience the world subjectively--from a first-person perspective where inner, qualitative, and ineffable sensations occur that are accessible only to us. Take, for example, the way it feels when you see a pretty girl, drink a beer, step on a nail, or hear a moody orchestra.
The truth is, scientists are still trying to figure all this out. Even neuroscientist and popular author Sam Harris--who shares Musk's robot-rebellion concerns--acknowledges the hard problem when stating that whether a machine could be conscious is "an open question". Unfortunately, he doesn't seem to fully realize that for machines to pose an existential threat arising from their own self-interests, consciousness is required.
Yet although the problem of consciousness is admittedly hard, there is no reason to believe that it is not solvable by science. So what kind of progress have we made so far?

Consciousness Is A Biological Phenomenon

Much like a computer, neurons communicate with one another by exchanging electrical signals in a binary fashion.
Either a neuron fires or it doesn't, and this is how neural computations are carried out. But unlike digital computers, brains contain a host of analogue cellular and molecular processes, biochemical reactions, electrostatic forces, global synchronized neuron firing at specific frequencies, and unique structural and functional connections with countless feedback loops. Even if a computer could accurately create a digital representation of all these features, which in itself involves many serious obstacles, a simulation of a brain is still not a physical brain.
There is a fundamental difference between the simulation of a physical process and the physical process itself. This may seem like a moot point to many machine learning researchers, but when considered at length it appears anything but trivial. In other words, the hardware of the "machine" matters, and mere digital representations of biological mechanisms have no power to cause anything to happen in the real world. Let's consider another biological phenomenon, like photosynthesis. Photosynthesis refers to the process by which plants convert light into energy. This process requires specific biochemical reactions only viable given a material that has specific molecular and atomic properties.
A perfect computer simulation--an emulation--of photosynthesis will never be able to convert light into energy, no matter how accurate it is and no matter what type of hardware you provide the computer with. However, there are in fact artificial photosynthesis machines. These machines do not merely simulate the physical mechanisms underlying photosynthesis in plants; instead, they duplicate the biochemical and electrochemical forces, using photoelectrochemical cells that perform photocatalytic water splitting.
In a similar way, a simulation of water isn't going to possess the quality of 'wetness', which is a product of a very specific molecular arrangement of hydrogen and oxygen atoms held together by chemical bonds. Liquidity emerges as a physical state that is qualitatively different from anything expressed by either element alone.
Even the hot new consciousness theory from neuroscience, Integrated Information Theory, makes very clear that a perfectly accurate computer simulation of a brain would not have consciousness like a real brain, just as a simulation of a black hole won't cause your computer and room to implode. Neuroscientists Giulio Tononi and Christof Koch, who established the theory, do not mince words on the subject.
With this in mind, we can still speculate about whether non-biological machines that support consciousness can exist, but we must realize that these machines may need to duplicate the essential electrochemical processes, whatever those may be, that occur in the brain during conscious states.

Instead of drawing such circuits repeatedly, computer designers rely on symbols to depict them. The utility of such symbols is demonstrated below for two kinds of computer activity: the transfer of an information bit from one register to another and the addition of two numbers by a half-adder.
The half-adder is called that because it can deal only with the two numbers to be added and not with a "carry" from a previous stage. To transfer a bit, the AND-gate must receive two input signals. One such signal (a 1 in this case) is already provided by the condition of FF-A. But the second signal is not present until a 1-pulse is applied over the transfer line. When the second signal appears over the line in the form of a 1-pulse, the two inputs of the AND-gate are both properly primed and the bit stored in FF-A is duplicated in FF-B.
Thus, in effect, the bit has been transferred. In the half-adder shown, notice that the addition of 1 and 1 in binary arithmetic must produce a sum of 0 and a carry of 1. The circuit accomplishes this. Incidentally, two half-adders can be combined to form a full adder capable of handling a carry from a previous stage as well as two new numbers to be added.
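The gate behavior just described can be sketched in code. This is a minimal Python model rather than circuit-level logic: a half-adder produces a sum (XOR of the inputs) and a carry (AND of the inputs), and two half-adders combine into a full adder exactly as the text describes.

```python
def half_adder(a, b):
    """Add two bits: the sum is XOR of the inputs, the carry is AND."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Two half-adders combined to handle a carry from a previous stage."""
    s1, c1 = half_adder(a, b)          # add the two new bits
    s2, c2 = half_adder(s1, carry_in)  # fold in the incoming carry
    return s2, c1 | c2                 # either stage may raise the carry

# Adding 1 and 1 in binary gives a sum of 0 and a carry of 1:
print(half_adder(1, 1))     # (0, 1)
print(full_adder(1, 1, 1))  # (1, 1)
```

Chaining one full adder per bit position, with each carry feeding the next stage, yields a complete multi-bit adder.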
Arithmetic units can be operated serially, with one pulse following the other in a single-file sequence, or in parallel, with the pulses stacked one over the other. Parallel operation is the faster method because more can be made to happen in a given time interval. We already have discussed three facilities basic to the processing of our inventory data — input, storage and arithmetic-logic. But we still have not processed the data. Nor, for that matter, have we established the traffic pattern needed to avert chaos in the computer. As indicated earlier, the machine must operate in a step-by-step fashion.
Thus, it must be told when to add or subtract, and so on. The control unit regulates the internal operations of each unit and the relations between them by means of electrical timing pulses that open and shut connecting gates in the required sequence. All operations in the computer take place in fixed time intervals measured by sections of a continuous train of pulses.
These basic pulses are sometimes provided by timing marks on a rotating drum, but more frequently they are generated by a free-running electronic oscillator called the "clock." The clock-beat sets the fundamental machine rhythm and synchronizes the auxiliary generators inside the computer. In a way, the clock is like a spinning wheel of fortune to which has been affixed a tab that touches an outer ring of tabs in sequence as the wheel revolves.
If a signal is present at an outer tab when the wheel tab gets there, the appropriate gate is opened. Each time interval represents a cycle during which the computer carries out part of its duties. One machine operation can be set up for the computer during an instruction cycle (I-time), for example, and then processed during an execution cycle (E-time).
Now that the control unit has been "added," we have a machine to process information and problems like inventory control. But we still have to get the results out of the computer. The results, stored in the memory after processing, are extracted in the form of pulses and emerge as output "readouts" on devices that look similar to the input devices mentioned earlier. Among the output systems used are punched cards, magnetic tape, magnetic disks and paper tape. The great demand, however, is for a printed readout. Thus, a great variety of printers is being used for the output requirement. A recent surge of application development has created a need for other output forms, including graphic displays on cathode-ray tubes, signals for many kinds of control devices and even voice reproduction.
All the parts making up the computer, however, are interconnected by wires, printed circuits and gates through which pulse information flows as directed by instructions given to the computer. These instructions, or program, determine the circuits that will be called upon to solve a problem, the interactions between them and the sequence in which the various machine elements will be called into play. All modern digital computers can be used to help program themselves.
But man is the ultimate controller. The computer stands mute until the instructions that man has written are fed into the machine and the right buttons pressed. Happily, the programmer — or the computer operator for that matter — does not have to know all about the electronic circuitry inside the computer.
On the other hand, the programmer must know the organization of the machine, the kinds of statements he can use in communicating with it and how to write these statements sequentially to get the computer to solve the problem he wants solved. The language used to communicate with the computer is called the "source language." Each source language has its vocabulary, its syntax and its repertoire of permissible instructions.
Use of Source Languages. Source languages are usually designated by acronyms. A statement in these languages is much less formidable than it would have to be if it were written directly in the code used by the computer hardware. Obviously, before a computer can deal with such an instruction, there must be some way to convert the source-language statement into the appropriate series of machine-language instructions. This type of program is called a "compiler."

The memory may be regarded as consisting of a vast set of "mailbox"-type slots. Each slot has a designation number called an "address" and each is large enough to hold a stipulated number of digits.
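To make the compiler idea concrete, here is a deliberately tiny Python sketch. The mnemonics, verb numbers, and addresses are all illustrative assumptions, not a real source language; the point is only that a readable statement gets converted into a numeric machine instruction.

```python
# Illustrative tables only: verb numbers and addresses are made up here.
VERBS = {"ADD": 16, "SUBTRACT": 20, "STORE": 7}
SYMBOLS = {"STOCK": 100, "REMOVALS": 200}   # variable name -> memory address

def compile_statement(statement):
    """Turn a statement like 'SUBTRACT REMOVALS' into a five-digit
    machine instruction: two-digit verb followed by three-digit address."""
    verb, operand = statement.split()
    return VERBS[verb] * 1000 + SYMBOLS[operand]

program = [compile_statement(s) for s in ["SUBTRACT REMOVALS", "STORE STOCK"]]
print(program)  # [20200, 7100]
```

A real compiler does far more (parsing, error checking, optimization), but the essential translation step is this: source text in, machine-language numbers out.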
Let us assume, for simplicity's sake, that there are 1,000 memory "mailboxes," or slots, with addresses ranging from No. 000 to No. 999. Assume further that we are going to use five digits in each program instruction to represent both the operation to be carried out by the computer and the addresses of the slots. Since three digits are required for the address numbers, two are left to represent the type of operation we want the computer to carry out. Thus, under the above circumstances, a typical instruction would be based on a five-digit number broken down into two parts.
Understanding digital computers, by Ronald Benrey.
Bear in mind that the "noun" part of the above instruction refers to the address number and not to the numbers stored as data at that address. Actually, in designating the address as part of his instruction, the programmer is really interested in the information available at that address.
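The verb/noun split of a five-digit instruction can be sketched in a few lines of Python (the example instruction 16042 is hypothetical): the first two digits select the operation and the last three give the address.

```python
def decode(instruction):
    """Split a five-digit instruction into its "verb" (operation)
    and "noun" (address) parts."""
    verb = instruction // 1000   # first two digits
    noun = instruction % 1000    # last three digits
    return verb, noun

print(decode(16042))  # (16, 42), i.e. operation 16 applied to address 042
```

Encoding is just the reverse: multiply the verb by 1,000 and add the address.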
Getting back to inventories again, let us take the case of a warehouse where we want to keep track of 50 categories of items that are being stocked—with the number of items in each category having a maximum quantity that can be expressed in five digits. At the end of each day we want to know how many items we have left in each category. Beginning with the first category, we could place the "starting amount" of items on hand at Address No.
Arithmetically, we must add the information content of Address No. With this accomplished, we would like to keep current by putting the result back at Address No. For the "verb" part of the instructions, let us designate the number 16 for add only, 17 for reset-to-0 and add, 20 for subtract, 07 for store, 30 for transfer and 50 for print. The simplified program instructions, then, could read as follows. Each of these instructions is a five-digit number, and the five-digit capacity of any memory slot can store it.
The instruction-number groups could be loaded into the "mailboxes" at addresses beginning at, say, No. Hence we have the following addresses for the above instructions. Once the machine is told—by the setting of switches on the computer console—to start executing instructions beginning at Address No. If we want the machine to print out the answer after completing the instruction at Address No. Now that one set of items has been processed, we want the computer to go on and do the same job automatically for the 49 other categories of items in the warehouse.
After the first category comes the second, so we might be tempted to put an instruction at Address No. Since 30 means transfer, this instruction would read as follows. But it would be best not to give such an instruction. For in going back to and proceeding as before to Nos. Let us suppose that the second-category information is actually stored at Address Nos. Thus, the information stored at , , and must be changed so the instructions will apply to the contents of Address Nos.
This can be accomplished because of the important fact that the instructions themselves are numbers and are therefore modifiable arithmetically. To modify the instruction at Address No.
The same must be done for the instructions stored at Address Nos. Consequently, we store a five-digit 1 at some memory address, say Address No. Then, forgetting about our last instruction for Address No. The add-1 procedure continues until all 50 categories of items have been processed and readouts printed for each category. After the 50th category is processed, the machine would come to a halt if the instructions contained such a command.
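The whole scheme (instructions stored as numbers in the same memory as the data, then modified arithmetically by adding 1 to their address parts) can be sketched in Python. The addresses and stock figures below are made up for illustration, since the actual numbers were not preserved above; only the verb numbers 17 (reset-to-0 and add), 20 (subtract), and 07 (store) come from the designations given earlier.

```python
# A toy accumulator machine: instructions are five-digit numbers
# (two-digit verb, three-digit address) held in the same memory as data,
# so a program can modify its own instructions arithmetically.
memory = [0] * 1000

memory[100] = 25     # starting stock, category 1 (illustrative)
memory[101] = 3      # starting stock, category 2 (illustrative)
memory[200] = 5      # items removed from category 1 today
memory[201] = 1      # items removed from category 2 today
memory[900] = 1      # the five-digit constant 1 used to bump addresses

memory[300] = 17100  # verb 17: reset accumulator to 0, add contents of 100
memory[301] = 20200  # verb 20: subtract contents of 200
memory[302] = 7100   # verb 07: store accumulator back at 100

def run_pass():
    """Execute the three-instruction program once."""
    acc = 0
    for pc in range(300, 303):
        verb, addr = divmod(memory[pc], 1000)
        if verb == 17:
            acc = memory[addr]
        elif verb == 20:
            acc -= memory[addr]
        elif verb == 7:
            memory[addr] = acc

run_pass()                       # category 1: 25 - 5 = 20
for pc in range(300, 303):       # the add-1 step: bump each address part
    memory[pc] += memory[900]
run_pass()                       # category 2: 3 - 1 = 2
print(memory[100], memory[101])  # 20 2
```

Because each instruction is just a number, adding the stored constant 1 turns 17100 into 17101, 20200 into 20201, and 07100 into 07101, retargeting the same three instructions at the next category's addresses.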
As we have dealt with the warehouse inventory problem above, the rather simple arithmetic requirements already have produced a more complex and longer set of instructions than might originally have been imagined. Additional instructions would actually be required to define the format of the printed output, including matters related to margins, column headings, dates, page numbering, and so on. But take heart. Remember that the computer itself has the built-in logical power, when properly programmed in a sound source language, to allow us to communicate with it in a language more like our own—whether it be English or the languages of science, engineering, mathematics or business.
When a source language is used, each source-language statement causes the compiler program for the language to generate sequences of machine-language instructions like those given for the warehouse inventory problem.

All these processes have a name. Taking in information is called input, storing information is better known as memory or storage, chewing information is also known as processing, and spitting out results is called output.
Imagine if a computer were a person. Suppose you have a friend who's really good at math. She is so good that everyone she knows posts their math problems to her. Each morning, she goes to her letterbox and finds a pile of new math problems waiting for her attention. She piles them up on her desk until she gets around to looking at them.
Each afternoon, she takes a letter off the top of the pile, studies the problem, works out the solution, and scribbles the answer on the back. She puts this in an envelope addressed to the person who sent her the original problem and sticks it in her out tray, ready to post.
Then she moves to the next letter in the pile.
You can see that your friend is working just like a computer. Her letterbox is her input; the pile on her desk is her memory; her brain is the processor that works out the solutions to the problems; and the out tray on her desk is her output. Once you understand that computers are about input, memory, processing, and output, all the junk on your desk makes a lot more sense.

Artwork: A computer works by combining input, storage, processing, and output. All the main parts of a computer system are involved in one of these four processes.
As you can read in our long article on computer history, the first computers were gigantic calculating machines and all they ever really did was "crunch numbers": solve lengthy, difficult, or tedious mathematical problems. Today, computers work on a much wider variety of problems—but they are all still, essentially, calculations. Everything a computer does, from helping you to edit a photograph you've taken with a digital camera to displaying a web page, involves manipulating numbers in one way or another.
Photo: Calculators and computers are very similar, because both work by processing numbers. However, a calculator simply figures out the results of calculations; and that's all it ever does. A computer stores complex sets of instructions called programs and uses them to do much more interesting things. Suppose you're looking at a digital photo you've just taken in a paint or photo-editing program and you decide you want a mirror image of it (in other words, to flip it from left to right).
You probably know that the photo is made up of millions of individual pixels (colored squares) arranged in a grid pattern. The computer stores each pixel as a number, so taking a digital photo is really like an instant, orderly exercise in painting by numbers!
To flip a digital photo, the computer simply reverses the sequence of numbers so they run from right to left instead of left to right. Or suppose you want to make the photograph brighter.
All you have to do is slide the little "brightness" icon. The computer then works through all the pixels, increasing the brightness value for each one by, say, 10 percent to make the entire image brighter. So, once again, the problem boils down to numbers and calculations. What makes a computer different from a calculator is that it can work all by itself. You just give it your instructions (called a program) and off it goes, performing a long and complex series of operations all by itself. Back in the 1970s and 1980s, if you wanted a home computer to do almost anything at all, you had to write your own little program to do it.
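Both operations can be sketched in a few lines of Python, treating the photo as a small grid of grayscale pixel values rather than a real image file (a simplifying assumption): flipping reverses each row of pixels, and brightening raises each value by roughly 10 percent, capped at the maximum of 255.

```python
# A tiny grayscale "photo": each number is one pixel's brightness (0-255).
photo = [
    [10, 50, 90],
    [20, 60, 100],
]

# Mirror image: reverse the order of the pixels in every row.
flipped = [row[::-1] for row in photo]

# Brighter image: add roughly 10 percent to each pixel, capped at 255.
brighter = [[min(255, p + p // 10) for p in row] for row in photo]

print(flipped[0])   # [90, 50, 10]
print(brighter[0])  # [11, 55, 99]
```

Real image editors do the same thing, just over millions of pixels and usually three color channels per pixel instead of one.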
For example, before you could write a letter on a computer, you had to write a program that would read the letters you typed on the keyboard, store them in the memory, and display them on the screen. Writing the program usually took more time than doing whatever it was that you had originally wanted to do (writing the letter).
Pretty soon, people started selling programs like word processors to save you the need to write programs yourself. Today, most computer users rely on prewritten programs like Microsoft Word and Excel or download apps for their tablets and smartphones without caring much how they got there.
Hardly anyone writes programs any more, which is a shame, because it's great fun and a really useful skill. Most people see their computers as tools that help them do jobs, rather than complex electronic machines they have to pre-program. Some would say that's just as well, because most of us have better things to do than computer programming. Then again, if we all rely on computer programs and apps, someone has to write them, and those skills need to survive.
Thankfully, there's been a recent resurgence of interest in computer programming. There's a growing hobbyist movement, linked to build-it-yourself gadgets like the Raspberry Pi and Arduino. And Code Clubs, where volunteers teach kids programming, are springing up all over the world. The beauty of a computer is that it can run a word-processing program one minute—and then a photo-editing program five seconds later. In other words, although we don't really think of it this way, the computer can be reprogrammed as many times as you like. This is why programs are also called software. They're "soft" in the sense that they are not fixed: they can be changed easily.
By contrast, a computer's hardware—the bits and pieces from which it is made and the peripherals, like the mouse and printer, you plug into it—is pretty much fixed when you buy it off the shelf. The hardware is what makes your computer powerful; the ability to run different software is what makes it flexible. That computers can do so many different jobs is what makes them so useful—and that's why millions of us can no longer live without them!
Suppose you're back in the late 1970s, before off-the-shelf computer programs have really been invented. You want to program your computer to work as a word processor so you can bash out your first novel—which is relatively easy but will take you a few days of work.