In Part 1, we delved into the world of the Programmer. In Part 2, we looked at The Program’s inner world and explored what it means to exist. Now, in this final part, we will talk about how the world around us (The Program) is parsed and interpreted.
I’m sure that at some point in Your life You have encountered the following statement:
“Perception is reality.”
Love it or hate it, the concept is extremely philosophically rich. In the context of this essay, a solipsistic approach to programming, it is a fundamental* truth.
*Stick with me here. I am explicitly stating that given only a single fact (the premise of this post): One single mind-world can be known to exist (Your mind), as Your reality consists only of the perception of Your sensory inputs. Interpret this as a philosophical supposition, not a social or cultural one. (That would be a completely different essay.)
So far we have looked into the programmer’s reality, or perception thereof. Now let’s flip things around a bit and instead ponder the perception (or taken generally, the perceived world) of Your program. What can this possibly mean? Well, it only takes a tiny bit of reasoning, and perhaps a pinch of imagination, to figure out how and what a program perceives. Here is a probably-mostly-exhaustive list under most circumstances*:
- IO – Input/Output – Pretty obvious, huh? There are several types of IO, and no generic requirement to include any or all of them. Most commonly we’re talking about Standard Streams.
  - Standard Input – stuff that gets typed (usually) in a command line environment (terminal)
  - Standard Output/Standard Error – text-based output generated by the program, again typically in a command line environment (terminal)
  - File IO – an IO stream that reads/writes from/to a (semi) persistent data source (hard drive, flash card)
  - Network IO – an IO stream that reads/writes data over one or more additional “layers” of abstraction (protocols); easy example: a cloud drive 😉
- GPIO – General Purpose Input/Output – fancy hardware pins on integrated circuits that can interface with the physical world outside of the computer; bonus: they’re software defined and controlled
- Peripheral Devices – usually hardware-based things like keyboards, mice, touch pads, touch screens, sensors, joysticks, button panels, etc.
  - this stuff typically interfaces with the computer over a high-speed dedicated serial communication port; USB and its variants are the most ubiquitous, but GPIO can also be used, as well as other dedicated ports
- RAM – Random Access Memory – bits hang out here and are available for Your program (and other running programs) to read and write; RAM typically holds the current state of a running program (there are several kinds of RAM)
  - System RAM – those tasty GBs listed as part of every product You see nowadays
  - CPU Cache(s) – special RAM reserved for making things move quickly between the CPU and System RAM
  - Registers – tiny memory slots (measured in raw Bytes and bits) inside the CPU itself; this is the lowest level of computing and is also where the magic smoke lives
*It really depends how granular You want to go here. There are also all sorts of special cases and alternative uses for some of this stuff that could technically be categorized differently. For the purposes of this document, though, we only need a general overview.
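To make this a little more concrete, here is a minimal Python sketch of a program “perceiving” its world through a few of the channels above. (The file name and environment variable are hypothetical, chosen purely for illustration.)

```python
import os
import sys

# File IO: read the program's "memory" of past runs from disk.
# ("state.txt" is a hypothetical file name for illustration.)
def read_persisted_state(path="state.txt"):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return ""  # no memory of a previous existence

# Standard Input: whatever the outside world chooses to type at us.
def read_stdin_line():
    return sys.stdin.readline()

# Environment variables: a slice of the world handed to the process at birth.
def read_environment(name="HOME"):
    return os.environ.get(name, "<unknown>")

if __name__ == "__main__":
    print("persisted state:", repr(read_persisted_state()))
    print("environment:", read_environment())
```

Each function is a distinct “sense organ”: disk, terminal, and the inherited environment. From the program’s point of view, these inputs *are* the external world.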
So the point is what, exactly?
The point is that You must consider where Your program gets its information, and how such data is being presented/retrieved. When developing a program that must interact with the world in some way, it will be up to You to determine how to handle this appropriately. Your program will perceive its environment in ways that You do not have personal experience in handling. You are defining Your program’s external world for it. There are many things to consider, but probably the most important is the discrete nature of a computer.
You (assuming You are human) are receiving sensory data in a nice, smooth, continuous fashion. Your mind (brain) performs all sorts of gnarly calculus to figure out what’s going on, then feeds chunks of this info through Your neural wetware, which classifies and interprets the whole mess into a conscious perception of reality.
Contrast this with something discrete, like film. (Ignore the perception of a motion picture here; it’s just a trick anyway.) A film is physically a set of discrete, individual images that capture “slices” of state over time (traditionally 24 per second). These slices are also called samples. This is quite different from the human experience. And despite ever-increasing frame rates, a film will only ever capture discrete, individual slices of a continuous reality.
(Aside 1: Soooo, going completely off the rails here, there are some theories that suggest material reality is, at its most fundamental level, discrete. In other words, the Universe is quantized. This is “holy $#1t” territory. If You are in the mood for a complete mindf!&k, check this out.)
A digital computer is discrete, even when running parallel computations (threads). The architecture of a modern CPU is built around a clock, where each “tick” advances the computational state of the system. It might be crazy fast (3+ GHz – gigahertz – more than 3 billion clock ticks every second), but it is still discrete. There are no operations between ticks; there is a finite period of time between each tick. The computer lives tick-to-tick.
There are no discrete samples for a person’s senses.
(Aside 2: There are frequency/duration limits to perception. Some really fascinating research into human consciousness takes this even further by measuring neural activation in response to extremely fleeting events; it shows that the senses/brain receive all kinds of stuff that is never passed along to the conscious mind and registered as a perception. I highly recommend this book for more on that subject.)
The main concept that You should take away from this section is that “discrete” and “continuous” are very different things. If You are not careful when designing a program that reads from some external source, or from a continuous abstraction like a Stream, Your program may miss data entirely, or calculate strange things.
Sampling is kind of like converting Floats to Ints: You lose information as a result; You’re capturing a subset of the whole. A bunch of smart fellas in the early 1900s independently figured out that, at minimum, one needs to sample at more than twice the highest frequency one is interested in capturing. (There is a rigorous mathematical proof of this; it is commonly referred to as the Nyquist–Shannon sampling theorem.) You don’t need to derive the proof to understand what it means. Here’s an example:
If You want to accurately capture events from some source, You need to sample the source at least twice as often as the events occur. Say the events happen 10 times per second: Your program needs to check the sensor data at least 20 times every second in order to avoid missing events. That’s the gist.
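A tiny simulation makes the point. This is a deliberately simplified model of my own devising (each event stays “visible” only until the next one replaces it), using the numbers from the example above:

```python
# Simulate a sensor that pulses 10 times per second for one second,
# and pollers that sample it at different rates.
def events_seen(event_hz, sample_hz, duration=1.0):
    """Count how many distinct events a poller catches.

    Events occur at regular intervals; each event is 'visible' only
    until the next one replaces it. A sample catches whatever event
    is currently visible. (A deliberately simplified model.)
    """
    seen = set()
    for i in range(int(sample_hz * duration)):
        t = i / sample_hz
        current_event = int(t * event_hz)  # index of the visible event
        if current_event < int(event_hz * duration):
            seen.add(current_event)
    return len(seen)

print(events_seen(event_hz=10, sample_hz=20))  # fast enough: sees all 10
print(events_seen(event_hz=10, sample_hz=5))   # too slow: catches only 5
```

Polling at 20 Hz catches every one of the 10 events; polling at 5 Hz silently drops half of them, and Your program never knows what it missed.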
There are all sorts of other issues that contribute to the difficulty of the discrete domain. Sometimes they aren’t so obvious and can be anywhere on the technical spectrum:
- Event handling is susceptible to sampling issues – You could find that Your UI events aren’t getting processed consistently as a result (missing clicks/keyboard inputs, etc.)
- Asynchronous code/callbacks – stuff takes more time than expected; bad polling (sampling) misses things
- Concurrency/Multi-threading – all sorts of wacky fun here (deadlock, starvation, race conditions, ordering)
- Jitter – when Your clock’s ticks don’t arrive at perfectly regular intervals
- Latency – when things take longer than expected, typically over networks or external, electrical connections
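As a small taste of the “wacky fun” in the concurrency bullet, here is a sketch of the classic race condition: several threads incrementing a shared counter. Without a lock, the read-modify-write can interleave between threads and lose updates; holding a lock makes the result exact. (This is a generic illustration, not a pattern from the essay.)

```python
import threading

counter = 0
lock = threading.Lock()

def increment_safely(n):
    """Increment the shared counter n times, holding the lock so the
    read-modify-write can't interleave with another thread's."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=increment_safely, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # exactly 400000; remove the lock and updates may be lost
```

The unlocked version is the insidious kind of bug: it usually works on Your machine, then fails intermittently under load, because the interleaving is discrete and timing-dependent.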
So, yeah, perception is a big deal to Your program (not to mention Your employers and customers. hey-o!).
And we haven’t even yet talked about how the program can alter its own environment and change its own perceptions. But this article is already hella long. So, just think about that.
I Execute, Therefore I Am, or The Program as Solipsist
Let’s return to an earlier question: “Is a program a thing at all if it is not executed?”
Does a program exist if it is not being executed? Certainly its symbolic representation does. That alone can be used as a tool to understand things like the programmer’s intent and ontological view of what the program’s world should be. It can also be used as the vehicle of exploration the programmer uses to learn and refine beliefs into certainties.
But we’re still left with a rather unsatisfying lack of clarity on knowing whether or not a program is a Program if it is not being interpreted by a computer. Let’s ponder this for a moment:
The executing program is the only thing certain to exist. Its mind-world consists entirely of discrete state, IO processing, and persistent storage. Everything else may or may not be real. There is no programmer.
Here we suppose the executing program as the only “conscious” entity to verifiably exist. From this position, it seems that the written/symbolic representation of a program is irrelevant. It is only an inert collection of symbols. Only when the symbols are interpreted and executed does the program “become” real.
Two things nag at me about this situation:
- the discrete-ness of the computing/execution mechanism
- the property of homoiconicity (which has not previously been discussed)
The first deals with the fact that there is no continuous existence for the program’s execution. The process is simply the changing of state from one moment to the next. Although the moments are short, there is neither necessity nor guarantee that another moment will follow a previous one. Is each computed state (each snapshot of execution) an instance of the Program, an individual existence? Or is the Program to be understood as a certain number of discrete moments taken together? Is there some minimum number of moments before it is considered to exist?
The program itself cannot know when it will complete (the Halting Problem, for one), whether or not it has access to its own source code (symbolic representation). It seems we’re left with an existential conundrum similar to determining how many grains of sand constitute a “heap” (see the Sorites Paradox). Perhaps the answer lies in a temporal definition. But then the question becomes: how long (how many clock cycles) does it need to run before it is considered to exist? We could just say the answer is one in this case; the program is representable as one perturbation of the computer’s state. It kind of seems like a cop-out, though. It likely takes many more than a single cycle to even ‘load’ enough instructions to alter the computer’s state. Does a single increment of a CPU’s program counter constitute a program’s execution?
The second bothersome point is the possibility that the program’s symbolic representation might possess the property of homoiconicity. In other words, the syntax (or lack thereof) and the structure of the language are such that the program itself represents its own interpreted structure. Some use the phrase “code as data.” In this case the program can examine itself via its symbolic representation. To take this a little further, the source code and the interpreted instructions it represents are one and the same. Now the symbolic representation is definitely not irrelevant. So can only homoiconic programs be said to exist? Hrm. That doesn’t feel right either. Homoiconicity just removes some need for transformation between the symbolic representation and the computable structure; something still has to read, parse, interpret, and carry out the instructions.
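Python is not homoiconic, but we can sketch the “code as data” idea with nested tuples standing in for Lisp-style s-expressions. The same structure is both inspectable data and an executable program. (The mini-evaluator below is a toy of my own devising, just to illustrate the concept.)

```python
# An expression is either a number or a tuple: (operator, arg, arg).
program = ("+", ("*", 2, 3), 4)   # means (2 * 3) + 4

def evaluate(expr):
    """Interpret an expression tree. The 'source code' IS the data
    structure we walk -- no separate parse step needed."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    a, b = evaluate(left), evaluate(right)
    return {"+": a + b, "*": a * b}[op]

# The program can examine itself as plain data...
print(program[0])         # '+' -- the outermost operator
# ...and be executed from that very same representation.
print(evaluate(program))  # 10
```

Notice there is no transformation step between the “source” and the interpreted structure; that is the property the paragraph above is pointing at. But as noted, something (here, `evaluate`) still has to walk the structure and carry out the instructions.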
Backed into a corner
So what are we to conclude from this section? Well, one thing is that determining the definition of ‘existence’ is freaking hard. And as a programmer, it may not be obvious that this is an important exercise.
Ultimately, we need to understand that creating a program is a bona fide act of creation. And as this creation becomes a part of our mind-world, we must recognize the impact it has on our reality. Let’s leave this with the following definition of a program:
A Program is the set of its symbolic representations, its discrete computed state(s), its cumulative impact over time, its persisted/serialized output, the process of reasoning about it and editing it, and its influence on its own perception (how it presents itself to other entities/systems).
Kind of a mouthful. The TL;DR is that a program isn’t just the source code. As a programmer, You should reflect on programming as an act of creation and an alteration of Your own reality.
Ultimately, nothing lasts forever. Well, perhaps energy itself does, but entropy sees to it that no information will be preserved indefinitely. (Heat Death of the Universe is perhaps the most depressing thing ever proposed.)
Whether we’re talking about a running program or simply the legacy of programs and source code You leave behind over the course of Your career/life, some day Your super amazing thing will be forgotten. For Your own sanity, here’s a small piece of advice: try to focus less on being clever or impressing the social horde. Consider the implications of Your choices and the effect they have on Your own reality, a reality in which You assume You are the only conscious mind.
This will move You to critical thought about Your perceptions, Your knowledge, and how You communicate. And when it turns out that there are other conscious beings after all, You’ll be that much better equipped for success.
Much is being done in the world of artificial intelligence, a field that has long grappled with these kinds of philosophical questions and interpretations.
Admittedly, the actual philosophical content of this three part essay is a little superficial. The purpose of it was to alter Your perspective on what it is to be a programmer, and not provide an academically rigorous overview of the philosophical concepts mentioned throughout.
I’m afraid I’m leaving this essay with a rather anti-climactic ending. You probably have more questions than You started out with. But that’s a good thing! Be curious. Seek answers. Question Your reality. There’s more to programming than syntax, math, pull requests, and stand ups.
You are an artist.
- I am not a professional philosopher, nor is this an academic paper. A lot of this is just me thinking “out loud”. There are plenty of inline links for the reader to explore the concepts on their own.
- Big words and technical terms are presented here with the goal of making them less scary for those who were previously unfamiliar with them.
- You will continue to see technical terms during Your career. If You’re unfamiliar, just ask or look ’em up. It’s the concepts that matter.
- You will often be pleasantly surprised to find that You already have experience with a concept once You learn the “official esoteric” vocabulary.
- Programming is an act of creation and a form of art that influences our perceptions of the world(s) we inhabit.
- You are not an imposter. Learning takes time and practice.
- You might be the only real person to exist.