Selected Passages from The Art of Doing Science and Engineering

Below are my notes from Richard Hamming's The Art of Doing Science and Engineering, published by Stripe Press. While there are some direct quotes, I've mostly rewritten the main points from the book in my own words. When I interject with a thought of my own or reference another work, I'll use an indented quote.


Hamming refers to teaching the style of science and engineering. Since style is often a type of art, he concludes that he should teach it the same way art teachers do: by letting their students experiment through trial and error. Hamming believes style is the difference between an average person and a great one.

This seems related to Paul Graham's concept of taste and design.

Teachers often prepare students for the past rather than the future because "no one can know the future". This is a lazy excuse; it is still the responsibility of the teacher to try to prepare students for the projected future as best they can.

Art is best communicated by examining it from many sides and doing so repeatedly. While the book itself is not intended to be technical, it still uses math. This should provide a deeper understanding than words alone.

Introduction

The past is much more uncertain than usually recognized due to false reporting. Don't assume historical accounts are reliable just because they describe a time that has already happened – they can contain as much error as projections of the future.

Using mathematics as a way to test assumptions and create back-of-the-napkin estimates provides deep insights into beliefs. It is still possible to express these as words, but they are much less powerful.

I am reminded of Model Thinking and advice from Scott Page on becoming a Many Model Thinker. The true purpose of models is to clearly define assumptions and be able to see how changing them affects the expected outcome.

Orientation

This chapter sets expectations for students and readers. It is filled with life lessons, examples of mathematical models to aid thinking, and the distinction between science and engineering.

The purpose of this book is to educate, not to train. Education is what, when, and why to do things. Training is how to do it.

Hamming emphasizes that the course is concerned with style. The best way to teach style is to provide many examples and examine the works of others.

Using firsthand knowledge while teaching is often accompanied by a taboo against "bragging." This is unfortunate because firsthand experience is one of the best ways to imprint a lesson onto someone's mind, and an effective educator must actually change the learner's mind.

You need to do the work to make education useful – it cannot be absorbed passively. Once again, painting is referenced here as an analogy. It is important to study under an experienced master, but style cannot be learned merely from copying the past.

Style does not necessarily translate across time. What worked for one master may not work for another. This means that you need to have an idea of what the future will look like if you are to develop an effective style.

Enter: back-of-the-envelope calculations! These types of rough calculations are performed much more often by great scientists and engineers than run-of-the-mill people.

When you hear a public claim in the press or media, try using quantitative analysis to check whether it is plausible.
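
A quick sketch of what such a check might look like; the claim and every number below are made up purely for illustration:

```python
# Back-of-the-envelope check of a hypothetical claim that "the average person
# speaks 100,000 words a day". Every number here is a rough assumption.

claimed_words_per_day = 100_000   # the claim being tested
waking_hours_per_day = 16         # assumed

words_per_minute_required = claimed_words_per_day / (waking_hours_per_day * 60)
print(f"Required rate: {words_per_minute_required:.0f} words per minute, nonstop")
# ~104 words per minute of continuous talking for every waking hour,
# which is implausible, so the claim deserves skepticism.
```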

This reminds me of Pauli's Not even wrong comment. Also, this recent essay from Ben Evans.

Not only is new knowledge growing quickly, but old knowledge is just as quickly becoming obsolete! This rapid growth and turnover applies not only to scientific knowledge, but to any content created by human thought, including music, local knowledge, media, etc.

To combat this wave of knowledge it is necessary to focus on fundamentals and on the ability to learn (hence the subtitle of the book, "Learning to Learn"). One test of whether a piece of knowledge is fundamental is whether it has existed for a long time.

The difference between science and engineering:

  • In science, if you know what you are doing, you should not be doing it.
  • In engineering, if you do not know what you are doing, you should not be doing it.

It is rare to see either science or engineering in a pure state, but this glib characterization is helpful to distinguish where to spend your time.

Science and engineering are increasingly growing together. This means it will be even more important in the future to master emerging fields on your own without being passively taught.

Often our ability to predict the future is not limited by physical laws, but rather by human-made laws, habits, organizational rules, regulations, personal egos, and inertia. Engineers are not trained to grapple with these problems as much as they should be to make change happen.

Another reason why it is difficult to predict the future is because unforeseen technological inventions can completely upset the most careful predictions.

Despite how difficult it can be to predict the future, the best way to make meaningful progress is to have a vision that you can work towards. Put simply: no vision, not much of a future.

Having an active imagination can help to create meaningful visions of the future. This requires setting aside dedicated time - say, Friday afternoons - to focus on trying to understand what will happen in the future.

Try to answer the following questions when predicting the future:

  • What is possible? (Science)
  • What is likely to happen? (Engineering)
  • What is desirable to happen? (Ethics)

The standard practice of organizing knowledge into departments and fields tends to conceal the homogeneity of knowledge. It also causes knowledge to fall between the cracks.

Computers have the following advantages compared to humans:

  • Economics: cheaper
  • Speed: faster
  • Accuracy: more precise
  • Reliability: built-in error correction
  • Freedom from boredom
  • Bandwidth: far higher
  • Ease of retraining: can write a new program instead of unlearning, relearning, etc.

Trying to achieve excellence in a specific area is a worthy goal in itself. The true gain is in the struggle and not the achievement. As Socrates said: the unexamined life is not worth living.

Foundations of the digital (discrete) revolution

This chapter covers the reasons behind the digital revolution, how to measure growth and innovation, and offers a way to think about the tradeoffs of specialization.

Many signals in nature are continuous (musical sounds, heights and weights, distances covered, velocities, densities, etc). To deal with this, we usually convert these continuous signals into sampled discrete signals that can be more easily transmitted and analyzed.
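
As a rough illustration of that conversion (the 440 Hz tone, sample rate, and bit depth below are arbitrary choices, not anything from the book):

```python
import math

# Sampling makes the signal discrete in time; quantization makes it discrete
# in amplitude. Together they turn a continuous signal into bits.
SAMPLE_RATE = 8_000   # samples per second (assumed)
LEVELS = 256          # 8-bit quantization (assumed)

def sample_and_quantize(duration_s=0.001):
    samples = []
    for i in range(int(SAMPLE_RATE * duration_s)):
        t = i / SAMPLE_RATE                        # discrete instant in time
        x = math.sin(2 * math.pi * 440 * t)        # continuous-valued amplitude
        samples.append(round((x + 1) / 2 * (LEVELS - 1)))  # discrete level
    return samples

print(sample_and_quantize()[:8])
```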

The digital revolution is the widespread conversion of continuous signals into discrete signals. It is happening because:

  • Discrete signals are easier to losslessly transmit (repeaters vs. amplifiers). Computers are beneficial to help us transmit information through space (signaling) and through time (storage).
  • The invention of transistors and integrated circuits. These inventions massively decreased the cost of computing by allowing for more effective transmission of information through space and time.
  • Society is moving from a material goods society to an information service society. Instead of organizing atoms, people are now organizing bits – storing information in a material form.
  • Computers make it possible for robots to do many things that were previously reserved for humans. Robots are defined as devices that handle and control things in the material world.
    • For instance, robots in manufacturing are capable of the following:
      • Produce a better product under tighter control limits
      • Produce a cheaper product
      • Produce a different product

It has rarely proved practical to produce exactly the same product by machines as we produced by hand.

If you want to succeed in undergoing a digital transformation as an organization you must get the essentials of the job in mind. Only then can you design the mechanization to do the job rather than trying to mechanize the current version.

And always, he fought the temptation to choose a clear, safe course, warning 'That path leads ever down into stagnation.'

Computers are increasing the pace of knowledge by decreasing the cost of running experiments (i.e. simulations).

This became clear at Los Alamos during the development of the nuclear bomb – a small scale experiment was not possible so (very primitive) computers were used.

The amount of experiments done in simulation will far exceed the number of physical experiments performed due to the difference in cost.

Despite the fact that simulations are cheaper it is still important to look to nature for inspiration.

Computers are allowing engineering to improve by designing and building far more complex things. Computers are now often an essential component of good design.

Computers have given management the ability to micromanage organizations, often to the detriment of performance. Central planning often underperforms decentralized decision making in large organizations.

Computers are further invading entertainment and information fields like sports, media, and traditional businesses that rely on information retrieval (e.g. airline bookings).

Computers are influencing military operations by placing a premium on information.

The growth of most fields follows an S-shaped curve. The simplest model of growth assumes the rate of growth is proportional to the current size (e.g. compound interest, bacterial and human population growth).

The growth of a field is determined by the level of sustained innovation, which allows new S-curves to take off from around the saturation level of the old one. Since the growth constant k of knowledge production is proportional to the number of scientists alive, fields with more scientists should see more sustained innovation.
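
A minimal sketch of the two growth models described here, with made-up parameters:

```python
import math

def exponential(size0, k, t):
    """Growth rate proportional to current size: dS/dt = k * S."""
    return size0 * math.exp(k * t)

def logistic(size0, k, limit, t):
    """The same proportional growth, but saturating at a limit (the S-curve)."""
    a = (limit - size0) / size0
    return limit / (1 + a * math.exp(-k * t))

# Early on the two curves agree; later the S-curve flattens toward its limit.
for t in range(0, 51, 10):
    print(t, round(exponential(1, 0.2, t), 1), round(logistic(1, 0.2, 100, t), 1))
```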

Electrical engineering will become about (1) knowing which chips are available off the shelf, (2) how to put them together, and (3) how to write programs for them.

General purpose chips have the following advantages over special chips:

  • Other users of the chip will help to find errors or weaknesses
  • Other users will help to write the manuals needed to use it
  • Other users (incl. manufacturers) will suggest upgrades of the chip, creating a steady stream of upgrades with little effort
  • Inventory will not be a problem

Technical obsolescence is the other side of the coin for innovation. By simply using off the shelf chips and writing better programs you can achieve massive performance upgrades over time. Beware of special-purpose chips!

This Moore's Law is not as good as the old one. Moore's Law used to mean that if your software was slow, all you had to do was wait, and the inexorable progress of hardware would solve your problems. Now if your software is slow you have to rewrite it to do more things in parallel, which is a lot more work than waiting. — via Frighteningly Ambitious Startup Ideas

History of computers – hardware

This chapter covers the origins of computing hardware, beginning with primitive counting tools, discussing Von Neumann machines, and then projecting into the future.

The history of computing stretches back to primitive man. For example, Stonehenge was built in three phases, 1900 - 1700, 1700 - 1500, and 1500 - 1400 BC, that closely aligned with astronomical phenomena. Similarly, Chinese, Indian, and Mexican cultures were known to have built great observatories that we still know little about.

Note: It would be fascinating to understand the tools used in what is now Mexico before the Mayan collapse.

The sand pan and abacus were early examples of instruments specifically designed for computing.

Arabic numerals faced significant resistance and were made illegal before finally rising to prominence due to practicality and economic advantages. New types of computing hardware or concepts are often met with resistance.

The invention of logarithms was an important step that allowed multiplication to be simplified through an analog device. A slide rule marks numbers along its scales at lengths proportional to their logarithms, so adding two lengths on the slide rule produces the same result as multiplying the two numbers.
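
A tiny sketch of the underlying identity, log(a) + log(b) = log(ab); the input numbers are arbitrary:

```python
import math

# The slide-rule principle: lengths proportional to logarithms add, so the
# underlying numbers multiply.
a, b = 3.7, 42.0

length_sum = math.log10(a) + math.log10(b)  # "slide" one log scale along the other
product = 10 ** length_sum                  # read the answer back off the scale

print(product, a * b)  # both ~155.4 (a real slide rule gives ~3 significant figures)
```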

Slide rules used to be extremely popular, especially the ten-inch log log decitrig slide rule. It is no longer manufactured! 3-D Printing project, anyone?

The differential analyzer was another important analog computing machine. The earliest version of the device was made at MIT in the 1930s and it featured both mechanical and electronic interconnections. Applications included guided missile trajectories.

WWII sparked the use of electronic analog computers for military applications. These computers used condensers instead of mechanical wheels and balls as integrators. Like most military applications of this time, electronic analog computers were built by physicists who had been recruited for the war effort.

Kepler, Pascal, and Schickard were involved in the creation of a desk calculator as early as 1623. Leibnitz also experimented with computers that were capable of multiplication and division.

Babbage is widely recognized as the father of modern computing. He invented the difference engine, a device which evaluated polynomials using sequences of addition and subtraction.
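
A sketch of the idea behind the difference engine (the polynomial and step size are arbitrary choices): once the leading finite differences are seeded, every further table entry takes only additions.

```python
def p(x):
    return 2 * x * x + 3 * x + 5   # an arbitrary quadratic to tabulate

# Seed the "machine" with the value and its first and second differences.
d0 = p(0)                 # current value
d1 = p(1) - p(0)          # first difference
d2 = (p(2) - p(1)) - d1   # second difference (constant for a quadratic)

table = []
for _ in range(8):
    table.append(d0)
    d0 += d1              # addition only
    d1 += d2              # addition only

print(table)                     # [5, 10, 19, 32, 49, 70, 95, 124]
print([p(x) for x in range(8)])  # matches
```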

During his work on the difference engine, Babbage conceived of the Analytical Engine which had more similarities to the current von Neumann design.

Punched-card computing was originally invented to deal with the growing complexity of the federal census. In the 1880s it was determined that the current census would not be completed before the next one began, so a machine method was needed.

IBM then built the 601 mechanical punch, which did multiplication and additions, averaging a multiplication every two or three seconds. About 1,500 of these were available for rental and used at Los Alamos to compute the designs for the first atomic bombs.

The ENIAC was built for the U.S. Army and delivered in 1946. It had about 18,000 vacuum tubes and took up nearly an entire room.

Turing's Cathedral by George Dyson provides a fantastic overview of this effort.

Similar to the spark of creativity that happened to Babbage while working on the difference engine, the creators of the ENIAC envisioned a larger machine during the process of building their original creation.

The EDVAC was the successor to the ENIAC and the creators Mauchly and Eckert gave an open talk on the topic which sparked the subsequent wave of innovation in the field. Design of subsequent systems is often sparked by open lectures from those who were involved in the creation of the original invention.

Military funding played a large role in the computer revolution.

The physical limitations on the development of computers include:

  • Heat dissipation
  • Distance between gates
  • Speed of light

The saturation point for single-processor machines can be overcome by parallelizing tasks across machines. It is not likely that a single design will emerge as the standard parallel computer architecture.

The history of computers is almost a perfect example of an S-curve: a very slow start, followed by a rapid rise, and finally the inevitable saturation and plateau.

The purpose of computing is insight, not numbers.

Computers are composed of two-state devices for storing bits of information and gates which allow signals to pass through or be blocked.

These parts are assembled to create longer arrays of bits which can be used as number registers.

The most basic computer is composed of:

  • a storage device
  • a central control (containing a Current Address Register or CAR)
  • an arithmetic logic unit (ALU)

These components work together as follows:

  • Get the address of the next instruction from the CAR.
  • Go to that address in storage and get that instruction.
  • Decode and obey that instruction.
  • Add 1 to the CAR address, and start in again.
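
A toy sketch of that cycle; the instruction set and program below are invented purely for illustration, not taken from the book:

```python
# Storage holds both the program and the data, as in a von Neumann machine.
storage = {
    0: ("LOAD", 10),   # copy storage[10] into the accumulator
    1: ("ADD", 11),    # add storage[11] to the accumulator
    2: ("STORE", 12),  # write the accumulator to storage[12]
    3: ("HALT", None),
    10: 7, 11: 35, 12: 0,
}

car = 0           # Current Address Register
accumulator = 0   # stands in for the ALU's working register

while True:
    op, addr = storage[car]   # fetch the instruction at the CAR's address
    if op == "HALT":          # decode and obey it
        break
    elif op == "LOAD":
        accumulator = storage[addr]
    elif op == "ADD":
        accumulator += storage[addr]
    elif op == "STORE":
        storage[addr] = accumulator
    car += 1                  # add 1 to the CAR and start again

print(storage[12])  # 42; the machine attaches no meaning to any of these bits
```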

It is important to note that computers have no sense of what has happened or what is coming next - they simply respond to instructions as programmed. It is human beings who attach meaning to the bits.

From what we understand of the human mind, it acts in a similar way. Does that mean we are just advanced machines? We will return to this point in the chapters on AI.

History of computers – software

The control unit of computers used to be operated by humans. Eventually, we replaced humans with electric motors and plug board wiring that instructed machines where to find information, what to do with it, and where to put the answers on punch cards.

Plug boards were usually specially wired for certain jobs and would be reused in accounting offices during cycles.

Punched paper tapes were used next to feed inputs into machines, but these were difficult and messy due to the glue. Computers had very little internal storage at this point so the tapes were used to save outputs of computations, as well as feed the next steps of instructions.

The ENIAC was the first computer controlled by plug board wiring. It was eventually reprogrammed to take its instructions from the function tables (large racks of dials that set decimal digits of the program).

Internal programming became available when storage increased enough to hold programs in memory. This idea is frequently attributed to John Von Neumann who was a consultant to the ENIAC project at the time.

Early machines were typically "one address", meaning there was an instruction and a single address where the result was to be found or sent to.

This proved to be extraordinarily challenging when trying to avoid bugs and conflicts.

Original programming was done in binary – programmers wrote the address in binary as well as the instructions.

This led to octal notation, which groups binary digits into threes, and hexadecimal, which groups them into fours.
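
For example (the 12-bit value is arbitrary):

```python
bits = "101111000110"          # 12 binary digits
value = int(bits, 2)

print(value)       # 3014
print(oct(value))  # 0o5706 -> groups of three bits: 101 111 000 110
print(hex(value))  # 0xbc6  -> groups of four bits: 1011 1100 0110
```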

Errors were fixed by jumping to an empty area of memory, placing the replaced instruction there along with the instructions you wanted to insert, and then jumping back to the original sequence. This was ... problematic, with all sorts of strange jumps of control to strange places.

This constraint (frustrated programmers) led to the idea of libraries, or reusable pieces of code. At first, they used absolute address libraries, but this meant that the routine had to occupy the same locations in storage (hardcoded!!). Eventually, this became too large and they switched to a tactic called relocatable programs.

These techniques are described in something called the "Von Neumann Reports," which were never formally published.

Eventually a Symbolic Assembly Language program was developed. This was initially seen as a "waste of capacity" to translate the language into binary.

This drastically improved productivity, although it abstracted the decision of where to put things in storage away from the individual programmer.

FORmula TRANslation, or FORTRAN, was initially rejected by nearly all programmers. New types of computing hardware or concepts are often met with resistance.

Despite the obvious advantages to using FORTRAN over other programming languages, uptake was slow. Almost all professionals are slow to use their own expertise for their own work.

Monitors were originally referred to as "the system" and were not immediately obvious. This is because most users with enough expertise to actually create the device were too close to the problem to notice it was missing. To see what is obvious often takes an outsider.

Throughout the history of software there was a trend going from absolute to virtual machines.

  • Actual code instructions were replaced by programming languages.
  • Addresses were replaced by relocatable (dynamic) addresses.
  • Allocation of bits to specific areas of memory was handled by the machine.

This trend buffered the users from the machines by providing higher levels of abstraction. It also resulted in software that was not dependent on the machine.
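
A minimal sketch of the relocatable-program idea mentioned above (the instruction format is invented for illustration): addresses are kept as offsets from the start of the routine and fixed up only when the loader decides where it will live.

```python
# A routine whose addresses are offsets, not absolute storage locations.
routine = [("LOAD", 0), ("ADD", 1), ("STORE", 2)]

def load_at(base, code):
    """Relocate the routine by adding the load address to every offset."""
    return [(op, base + offset) for op, offset in code]

print(load_at(100, routine))   # [('LOAD', 100), ('ADD', 101), ('STORE', 102)]
print(load_at(5000, routine))  # the same routine, living elsewhere in storage
```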

FORTRAN was so successful because it translated thinking into code, instead of requiring users to learn a completely new way of thinking. FORTRAN outlasted other languages (like Algol) despite shortcomings. Psychologically designed programming languages have an advantage over more powerful, but less readable logically designed languages.

This kicked off a flurry of activity around special languages, or problem-oriented languages (POLs). These failed to gain traction due to incompatibilities and steep learning curves for adoption.

LISP was created by John McCarthy in 1962 as a language for theoretical purposes, with the compiler for LISP written in itself.

Paul Graham covers this well in "The Roots of Lisp"

The IBM 650 was a two-drum machine that operated in fixed-point decimal arithmetic. It became obvious that floating point was necessary for research purposes, but the 650 did not support this functionality. An obscure section, Appendix D of the book on the EDSAC, described a program called an interpreter, capable of getting a large program into small storage.

The original authors of the interpreter did not seem to understand what they had done. Almost everyone who opens a new field does not really understand it the way the followers do. Creators have to fight through so many dark difficulties that it obscures their view of the solution. Newton was the last of the ancients, not the first of the moderns.

When designing a new programming language, consider the four rules:

  • Easy to learn.
  • Easy to use.
  • Easy to debug.
  • Easy to use subroutines.

Subroutines define the meaning of the language.

Consider how Lisp is built on axioms, like geometry.

Again, "The Roots of Lisp"

Most programmers who write new languages write logical languages, since they tend to be logical people. These languages tend to be full of "one-liners" and lack redundancy.

This poses a problem, because it's not how we communicate. Spoken language is ~60% redundant. Written language is ~40% redundant.

Low redundancy results in many undetected errors. Judge a language by how well it fits the human.
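
As a rough, hedged illustration: compression ratio is only a crude stand-in for the redundancy Hamming is talking about, and the sample texts below are made up.

```python
import os
import zlib

# Redundant text shrinks a lot under compression; redundancy-free data does not.
english = ("the quick brown fox jumps over the lazy dog and then the lazy dog "
           "chases the quick brown fox around the quiet garden ") * 4
noise = os.urandom(len(english))  # essentially redundancy-free bytes

for label, data in [("english prose", english.encode()), ("random bytes", noise)]:
    saved = max(0.0, 1 - len(zlib.compress(data, 9)) / len(data))
    print(f"{label}: compression removes roughly {saved:.0%}")
```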

The end goal is that the person experiencing the problem does the programming, without an "expert" in the middle.

This is why Paul Graham recommends that startup founders work on problems they themselves have.

Until we have a better understanding of human communication, it is unlikely our software problems will vanish.

Programming is closer to novel writing than engineering.

The best way to improve your productivity as a programmer is to think before you write the program.

Different programmers vary significantly in productivity. The best policy to deal with this is to regularly fire the poor programmers.

Creativity

Creativity, originality, and novelty are often used interchangeably but they actually mean very different things.

Creativity is often suppressed in large or old organizations due to the stabilizing effect of elder "wisdom."

Creativity is not simply doing something new - you could just multiply two large numbers together to do that - but must include value to some group of people.

Creative works are often not recognized at the time of creation. This is so common in art that it has become a stereotype: the misunderstood artist.

Creativity is the ability to put together things which were not perceived to be related before in a way that creates value.

How creative something is can be measured by how psychologically distant the two concepts were prior to the act of creation.

Creativity seems to be driven by the mindset the creator is in, although many of our processes designed to promote creativity (e.g. brainstorming) do exactly the opposite!

Creativity is like sex: you can read all the books on it you want, but without direct experience you will have little chance of understanding how it works.

Creativity usually follows a process:

  • You recognize a problem
  • You refine the problem over time and get emotionally involved
  • You think about the problem intensely for a while, then stop working on it
  • Your subconscious begins working on the problem
  • A moment of inspiration strikes that helps you to further refine the problem or devise a solution

The most useful tool in creativity is the analogy.

When you learn something new, try to apply what you've learned to other fields.

Richard Feynman: “You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say, ‘How did he do it? He must be a genius!’”

Becoming more creative means changing yourself, not your surroundings.

Maintaining creativity over a long time period means being willing to let go of certain problems to make room for others.

Einstein was creative in his early years, but got stuck on finding a unified theory. Consider how early age is a handicap in highly creative fields like math and physics, but an asset in fields like music composition and literature.

The time to come up with creative ideas is when you are young - the time to apply them is when you are older.