Future applications of computers to your area of expertise
“Future applications of computers to your area of expertise” is a self-discussion exercise Richard Hamming suggests in his book The Art of Doing Science and Engineering. First, what are my areas of expertise? Given my degree and work: computers, first and foremost! The future applications of computers to computers – well, that sounds like an attempt to predict the future. Time to gaze into the crystal ball.
Programming Languages
The goal of computing is to minimize the distance between thought and execution. This is what so-called high-level languages are for – expressibility. Computers are automated symbol manipulators, not calculators. The latter view merely mechanizes the teams of human computers that were prevalent until the advent of the electronic computer; the former admits qualitative differences. As such, a good general advance in automatic symbol manipulation would be to automate more of it – to broaden it – so that we humans can focus on expressing the essence of any given program.
A relevant quote from Software Design for Flexibility: “Traditionally, programmers have not been able to design as architects. In very elaborate languages … the parti is tightly mixed with the elaborations. The ‘served spaces,’ the expressions that actually describe the desired behaviour, are horribly conflated with the ‘servant spaces’ such as the type declarations, the class declarations, and the library imports and exports.” In other words, a very high level language might allow us to sweep many of the details – that currently seem like they deserve specification – under the rug, to expose core thought better. I am reminded of the Viewpoints Research Institute and their goal of having a “full system” in under 20K LoC.
Prediction 1: Very High Level Languages will let us sweep more accidental complexity under the rug and focus on inherent complexity instead
Expressing the essential parts of a program requires suitable notation, as not all problems are suited to the same notation. Once you have a suitable notation, it also becomes easier to explore a problem-space. Of course, notation we feed to a machine must be executable somehow – which means we need better support for designing executable notations.
Prediction 2: Better tooling/languages for implementing notations (other languages, DSLs, language workbenches, what have you)
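To make “executable notation” concrete, here is a minimal sketch in Python (all names – machine, even_ones – are hypothetical illustrations, not any particular tool): a declarative transition table is itself the notation, and a small interpreter turns it into a runnable recognizer.

```python
def machine(transitions, start, accepting):
    """Build a runnable recognizer from a declarative transition table."""
    def run(symbols):
        state = start
        for s in symbols:
            state = transitions.get((state, s))
            if state is None:
                return False      # no transition defined: reject
        return state in accepting
    return run

# The notation: a plain data structure describing an automaton
# that accepts binary strings with an even number of 1s.
even_ones = machine(
    transitions={
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    },
    start="even",
    accepting={"even"},
)

print(even_ones("1101"))  # three 1s -> False
print(even_ones("1001"))  # two 1s  -> True
```

The point is that the transition table reads as a description of the problem, while the interpreter – the “tooling” the prediction asks for – is a separate, reusable concern.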
A downside of separate programming languages, use of DSLs, etc. is that interoperability is challenging. Ideally, you’d like to write different parts of a system in different notations that fit the different problems those parts address, and have the whole system work with as minimal an amount of goo as necessary. But if I recall correctly, language interoperability is still a challenge in language-oriented programming in Racket, for example.
Prediction 3: Better tooling for language interoperability (virtual machines, foreign code interfaces, transpilers, what have you)
Unrelated to the above, but synergistic (since code is data is code is data), higher-level concerns and abundant resources may lead to less separation between files and code. It’s clumsy that we can’t just store some data structure straight to disk, and load/share/send/inspect it as easily as a text file. Say you have a program error: just save the current stack to disk for later inspection, and then get the program into a runnable state again without further delay. Open up the stack in a different program to figure out what went wrong. Why not? (The performance overhead of having introspection available?)
Prediction 4: Blur the lines between program, code, data, files – a much richer computing environment
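As a small existing step toward that blurring, Python’s pickle module already lets a data structure round-trip to disk. A sketch, with a hypothetical “snapshot” structure standing in for the saved stack (pickle cannot capture a live call stack directly, so this only persists data we assemble ourselves):

```python
import os
import pickle
import tempfile

# A hypothetical snapshot of program state at the moment of an error.
snapshot = {"stack": ["main", "parse", "tokenize"],
            "locals": {"line": 42, "token": "if"}}

path = os.path.join(tempfile.mkdtemp(), "snapshot.pkl")
with open(path, "wb") as f:
    pickle.dump(snapshot, f)     # persist the structure as-is

with open(path, "rb") as f:
    restored = pickle.load(f)    # inspect it later, in a different program

print(restored == snapshot)     # True: the structure round-trips intact
```

The prediction is that this stops being a library feature you reach for and becomes the ambient default of the computing environment.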
Something that might facilitate the above would be first class environments (environments being, of course, just data structures themselves): trivially, first class environments would mean I could implement closure support in SPICLUM and persist anonymous functions. That’s a hacky approach, admittedly – but it’s a stepping stone.
Prediction 5: First class environments, giving more power to languages (prediction 1) and facilitating the above blurring (prediction 4)
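A sketch of the idea in Python, with hypothetical names (make_closure, apply_closure): if the captured environment is plain, first-class data, a closure becomes something you can inspect, rewrite, and in principle persist – exactly the stepping stone described above.

```python
def make_closure(params, body, env):
    """A closure as plain data: parameter names, a body (here a Python
    callable over an environment), and the captured environment as an
    explicit, inspectable dict."""
    return {"params": params, "body": body, "env": dict(env)}

def apply_closure(clo, *args):
    # Extend the captured environment with a frame binding the arguments.
    frame = dict(clo["env"])
    frame.update(zip(clo["params"], args))
    return clo["body"](frame)

# Capture x = 10 in a first-class environment...
add_x = make_closure(["y"], lambda env: env["x"] + env["y"], {"x": 10})
print(apply_closure(add_x, 5))   # -> 15

# ...and, because the environment is just data, inspect or rewrite it:
add_x["env"]["x"] = 100
print(apply_closure(add_x, 5))   # -> 105
```

In a language with real first-class environments this would be built in rather than simulated with dicts, but the shape of the power is the same.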
Hardware
The above predictions for programming languages all introduce overhead compared with programming in current “high level” languages, which are closer to the metal (sand?). Hence, fulfilling the above predictions requires that this overhead not be a problem. Roughly, this can happen through efficient enough implementations and through abundant machine resources: more computers, more parallelization and scheduling of work, faster computers.
Prediction 6: Ubiquitous computing makes machine resources a non-issue
Prediction 7: Relatively affordable renting of supercomputers as necessary
There’s nothing particularly special or superb about the von Neumann architecture, and von Neumann himself worked on alternatives. We can imagine computers based on any number of computational models: lambda calculus computers, neural network machines. Of course, they can simulate each other, but the physical implementation might affect the efficiency of running computations on the physical machines. Further, embedding simple computers in physical matter might lead to new materials.
Prediction 8: Proliferation of non-von Neumann machines, with their own pros and cons
Prediction 9: Programmable matter – claytronics, utility fog, swarm robotics, etc. become possible
The proliferation of non-von Neumann machines might make parallelism more natural – at the moment, multi-core computation is essentially an add-on to the basic serial model of the von Neumann machine. Distributed computing already wrestles with “real parallelism”, where each node has only a limited view of the whole system. I presume real parallelism will rely on local properties and emergent effects of organization, similar to biological structures: each cell just does its own stuff while co-operating with its neighbours.
Prediction 10: Hardware that “really” supports parallelism
Prediction 11: Systems organized along self-organizing/flexible local-property behaviours
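A toy illustration of the local-property idea in Python (the rule and names are hypothetical): each cell on a ring updates using only itself and its two neighbours – a simple majority rule – yet the global state settles into consolidated blocks without any central coordination.

```python
def step(cells):
    """One synchronous update: each cell takes the majority value of
    itself and its two immediate neighbours on a ring."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

state = [0, 1, 0, 1, 1, 1, 0, 0]
for _ in range(3):
    state = step(state)

print(state)  # -> [0, 0, 1, 1, 1, 1, 0, 0]: isolated cells smoothed away
```

Each cell’s rule is trivially parallel – no cell reads anything beyond its neighbourhood – which is the property the prediction expects hardware to exploit directly.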
In a different direction, we can imagine taking inspiration from GPUs and producing special-purpose chips for other common computational tasks than just graphics and general-purpose computing. Hence, we might see not just CPUs (central processing units) and GPUs (graphics processing units) but also … SOPUs (some other processing units).
Prediction 12: Efficiency gains through identifying other common computational tasks and making specialized hardware
Decentralization
It’s obvious that a lot of computer use so far has been based on mimicking existing processes and approaches. A prime example is the “desktop” metaphor for personal computing – based on the existing physical approach to offices, rather than optimizing for the natural fit of the computer. As if the Gutenberg press’s purpose were to save monasteries time spent copying books. The fact that we’re in the early stages of the computer age means that current social organization is unstable. Automation makes jobs superfluous; automation makes institutions superfluous. Hence, institutions must either cling to power or submit to decentralization. Generally, we submit to technology: we’ve allowed automobiles to ruin our cities, for example.
Prediction 13: Decentralization will hit full force, and e.g. allow direct ochlocracy
As digital assets become more important (digital real estate, ownership, virtual worlds, cryptocurrency, etc.), and with encryption easier than decryption, nation states’ monopoly on the threat of violence becomes less useful. Fragmentation of society into bubbles, with virtual space for them to exist apart, might lead to a reworked model of physical offerings by states – particularly with a focus on opt-in frameworks. This seems a natural consequence of automation and decentralization – intervention to stop that development is also possible, of course.
Prediction 14: Formal education gets replaced by real skill/knowledge patchwork learning (as not learning from the best becomes a losing proposition)
Prediction 15: Cyberspace eclipses meatspace
Prediction 16: Direct ochlocracy, decentralized finance, decentralized education, social fragmentation, bubbles, opt-in states, physical as gateway (a good number of people already pretend to live in the physical world while really living in cyberspace)
Prediction 17: Walled gardens become untenable because of effectively being opt-in manacles
VR
A possible consequence – and strong support – of the above decentralization, of course, is “real” VR. Brain interfaces would be the ultimate immersion, but that’s a big step. More immediately, haptic feedback of any kind might be good enough – it’d surprise me if we don’t see massive growth in VR usage once haptics become available. Using your hands will always be more immediate than using a controller, after all.
Prediction 18: Haptic feedback makes VR interesting and allows more fine-grained control of actions in cyberspace
Prediction 19: VR as the natural next-step fragmentation of society through metaverses
Prediction 20: VR industries – model-based development, simulations, etc.
Conclusion
Beyond the first section on programming languages, the above predictions chart what I find likely, not what I necessarily find preferable. I do enjoy a vision of automatic symbol manipulation augmenting human endeavours – machines have many advantages over humans. But it seems that too often, technological possibilities dictate our actions rather than augmenting the actions we ought to take. The capacity for surveillance brings about the use of surveillance. The capacity for distraction and hedonism… Seems to me we’re more likely to develop into virtual navel-gazers than star-gazers over the near horizon.
Anyhow, I’m not staking my future on the above predictions. This is a thought experiment; I have no skin in the game.