How Should We Talk to AIs?

Not many years ago, the idea of having a computer broadly answer questions asked in plain English seemed like science fiction. But when we released Wolfram|Alpha in 2009 one of the big surprises (not least to me!) was that we’d managed to make this actually work. And by now people routinely ask personal assistant systems—many powered by Wolfram|Alpha—zillions of questions in ordinary language every day.

Ask questions in ordinary language, get answers from Wolfram|Alpha

It all works fairly well for quick questions, or short commands (though we’re always trying to make it better!). But what about more sophisticated things? What’s the best way to communicate more seriously with AIs?

I’ve been thinking about this for quite a while, trying to fit together clues from philosophy, linguistics, neuroscience, computer science and other areas. And somewhat to my surprise, what I’ve realized recently is that a big part of the answer may actually be sitting right in front of me, in the form of what I’ve been building towards for the past 30 years: the Wolfram Language.

Maybe this is a case of having a hammer and then seeing everything as a nail. But I’m pretty sure there’s more to it. And at the very least, thinking through the issue is a way to understand more about AIs and their relation to humans.

Computation Is Powerful

The first key point—one that I came to understand clearly only after a series of discoveries I made in basic science—is that computation is a very powerful thing that lets even tiny programs (like cellular automata, or neural networks) behave in incredibly complicated ways. And it’s this kind of thing that an AI can harness.

A cellular automaton with a very simple rule set (shown in the lower left corner) that produces highly complex behavior
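Here’s a minimal sketch of how such a system can be specified and run in the Wolfram Language (using rule 30, a classic example of this phenomenon; the rule pictured above may differ):

    (* run the rule-30 cellular automaton from a single black cell for 200 steps *)
    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 200]]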

Looking at pictures like this we might be pessimistic: how are we humans going to communicate usefully about all that complexity? Ultimately, what we have to hope is that we can build some kind of bridge between what our brains can handle and what computation can do. And although I didn’t look at it quite this way, this turns out to be essentially just what I’ve been trying to do all these years in designing the Wolfram Language.

Language of Computational Thinking

I have seen my role as being to identify lumps of computation that people will understand and want to use, like FindShortestTour, ImageIdentify or Predict. Traditional computer languages have concentrated on low-level constructs close to the actual hardware of computers. But in the Wolfram Language I’ve instead started from what we humans understand, and then tried to capture as much of it as possible in the language.
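For instance, FindShortestTour wraps a whole traveling-salesman-style optimization in one human-readable call. A minimal sketch (the points here are arbitrary):

    (* gives the tour length, together with the order in which to visit the points *)
    FindShortestTour[{{0, 0}, {3, 1}, {1, 4}, {5, 2}, {2, 2}}]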

In the early years, we were mostly dealing with fairly abstract concepts, about, say, mathematics or logic or abstract networks. But one of the big achievements of recent years—closely related to Wolfram|Alpha—has been that we’ve been able to extend the structure we built to cover countless real kinds of things in the world—like cities or movies or animals.

One might wonder: why invent a language for all this; why not just use, say, English? Well, for specific things, like “hot pink”, “new york city” or “moons of pluto”, English is good—and actually for such things the Wolfram Language lets people just use English. But when one’s trying to describe more complex things, plain English pretty quickly gets unwieldy.
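A sketch of how this looks in practice, using the built-in Interpreter framework (both interpreter types shown are standard ones, though evaluating them requires connectivity to the knowledgebase):

    Interpreter["Color"]["hot pink"]       (* a symbolic color *)
    Interpreter["City"]["new york city"]   (* a symbolic Entity for New York City *)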

Imagine, for example, trying to describe even a fairly simple algorithmic program. A back-and-forth dialog—“Turing-test style”—would rapidly get frustrating. And a straight piece of English would almost certainly end up as incredibly convoluted prose, like one finds in complex legal documents.

The Wolfram Language specifies clearly and succinctly how to create this image. The equivalent natural-language specification is complicated and subject to misinterpretation.
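The original image isn’t reproduced here, but an illustrative stand-in gives the flavor: a one-line symbolic specification whose plain-English description would take a paragraph.

    (* twelve unit circles centered on the points of a ring of radius 1 *)
    Graphics[Table[Circle[{Cos[2 Pi k/12], Sin[2 Pi k/12]}, 1], {k, 12}]]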

But the Wolfram Language is built precisely to solve such problems. It’s set up to be readily understandable to humans, capturing the way humans describe and think about things. Yet it also has a structure that allows arbitrary complexity to be assembled and communicated. And, of course, it’s readily understandable not just by humans, but also by machines.

I realize I’ve actually been thinking and communicating in a mixture of English and Wolfram Language for years. When I give talks, for example, I’ll say something in English, then I’ll just start typing to communicate my next thought with a piece of Wolfram Language code that executes right there.

The Wolfram Language mixes well with English in documents and thought streams

Understanding AIs

But let’s get back to AI. For most of the history of computing, we’ve built programs by having human programmers explicitly write lines of code, understanding (apart from bugs!) what each line does. But achieving what can reasonably be called AI requires harnessing more of the power of computation. And to do this one has to go beyond programs that humans can directly write—and somehow automatically sample a broader swath of possible programs.

We can do this through the kind of algorithm automation we’ve long used in Mathematica and the Wolfram Language, or we can do it through explicit machine learning, or through searching the computational universe of possible programs. But however we do it, one feature of the programs that come out is that they have no reason to be understandable by humans.

Engineered programs are written to be human-readable. Automatically created or discovered programs are not necessarily human-readable.
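Here’s a minimal sketch of what “searching the computational universe” can look like; the complexity criterion (compressed size) is a deliberately crude stand-in, chosen for illustration rather than anything canonical:

    (* score each elementary cellular automaton rule by how poorly its behavior compresses *)
    complexity[r_] := StringLength[Compress[CellularAutomaton[r, {{1}, 0}, 100]]]

    (* pick out the five rules whose behavior looks most complex by this measure *)
    TakeLargestBy[Range[0, 255], complexity, 5]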

At some level it’s unsettling. We don’t know how the programs work inside, or what they might be capable of. But we know they’re doing elaborate computation that’s in a sense irreducibly complex to analyze.

There’s another, very familiar place where the same kind of thing happens: the natural world. Whether we look at fluid dynamics, or biology, or whatever, we see all sorts of complexity. And in fact the Principle of Computational Equivalence that emerged from the basic science I did implies that this complexity is in a sense exactly the same as the complexity that can occur in computational systems.

Over the centuries we’ve been able to identify aspects of the natural world that we can understand, and then harness them to create technology that’s useful to us. And our traditional engineering approach to programming works more or less the same way.

But for AI, we have to venture out into the broader computational universe, where—as in the natural world—we’re inevitably dealing with things we cannot readily understand.

What Will AIs Do?

Let’s imagine we have a perfect, complete AI that’s able to do anything we might reasonably associate with intelligence. Maybe it’ll get input from lots of IoT sensors, and it’ll have all sorts of computation going on inside. But what is it ultimately going to try to do? What is its purpose going to be?

Answering this dives into some fairly deep philosophy, involving issues that have been batted around for thousands of years—but which are finally going to really matter in dealing with AIs.

One might think that as an AI becomes more sophisticated, so would its purposes, and that eventually the AI would end up with some sort of ultimate abstract purpose. But this doesn’t make sense. Because there is really no such thing as abstractly defined absolute purpose, derivable in some purely formal mathematical or computational way. Purpose is something that’s defined only with respect to humans, and their particular history and culture.

An “abstract AI”, not connected to human purposes, will just go along doing computation. And as with most cellular automata and most systems in nature, we won’t be able to identify—or attribute—any particular “purpose” to that computation, or to the system that does it.

Giving Goals for an AI

Technology has always been about automation: humans define goals, and then the technology achieves them automatically.

For most kinds of technology, those goals have been tightly constrained, and not too hard to describe. But for a general computational system they can be completely arbitrary. So then the challenge is how to describe them.

What do you say to an AI to tell it what you want it to do for you? You’re not going to be able to tell it exactly what to do in each and every circumstance. You’d only be able to do that if the computations the AI could do were tightly constrained, like in traditional software engineering. But for the AI to work properly, it’s going to have to make use of broader parts of the computational universe. And it’s then a consequence of a phenomenon I call computational irreducibility that you’ll never be able to determine everything it’ll do.

So what’s the best way to define goals for an AI? It’s complicated. If the AI can experience your life alongside you—seeing what you see, reading your email, and so on—then, just like with a person you know well, you might be able to tell the AI at least simple goals just by saying them in natural language.

But what if you want to define more complex goals, or goals that aren’t closely associated with what the AI has already experienced? Then small amounts of natural language wouldn’t be enough. Perhaps the AI could go through a whole education. But a better idea would be to leverage what we have in the Wolfram Language, which in effect already has lots of knowledge of the world built into it, in a way that both the human and the AI can use.

AIs Talking to AIs

Thinking about how humans communicate with AIs is one thing. But how will AIs communicate with one another? One might imagine they could do literal transfers of their underlying representations of knowledge. But that wouldn’t work, because as soon as two AIs have had different “experiences”, the representations they use will inevitably be at least somewhat different.

And so, just like humans, the AIs are going to end up needing to use some form of symbolic language that represents concepts abstractly, without specific reference to the underlying representations of those concepts.

One might then think the AIs should just communicate in English; at least that way we’d be able to understand them! But it wouldn’t work out. Because the AIs would inevitably need to progressively extend their language—so even if it started as English, it wouldn’t stay that way.

In human natural languages, new words get added when there are new concepts that are widespread enough to make representing them in the language useful. Sometimes a new concept is associated with something new in the world (“blog”, “emoji”, “smartphone”, “clickbait”, etc.); sometimes it’s associated with a new distinction among existing things (“road” vs. “freeway”, “pattern” vs. “fractal”).

Often it’s science that gives us new distinctions between things, by identifying distinct clusters of behavior or structure. But the point is that AIs can do that on a much larger scale than humans. For example, our Image Identification Project is set up to recognize the 10,000 or so kinds of objects that we humans have everyday names for. But internally, as it’s trained on images from the world, it’s discovering all sorts of other distinctions that we don’t have names for, but that are successful at robustly separating things.
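For the categories we do have names for, the usage is a one-liner. A sketch (the exact category returned may vary between versions of the system):

    (* identify the object in a standard test image; the result is a symbolic entity *)
    ImageIdentify[ExampleData[{"TestImage", "House"}]]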

I’ve called these “post-linguistic emergent concepts” (or PLECs). And I think it’s inevitable that in a population of AIs, an ever-expanding hierarchy of PLECs will appear, forcing the language of the AIs to progressively expand.

But how could the framework of English support that? I suppose each new concept could be assigned a word formed from some hash-code-like collection of letters. But a structured symbolic language—as the Wolfram Language is—provides a much better framework. Because it doesn’t require the units of the language to be simple “words”, but allows them to be arbitrary lumps of symbolic information, such as collections of examples (so that, for example, a word can be represented by a symbolic structure that carries around its definitions).
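A small sketch of what such a “lump” looks like in practice: a city is not a string of letters, but a symbolic entity that computation can act on directly.

    nyc = Entity["City", {"NewYork", "NewYork", "UnitedStates"}];
    EntityValue[nyc, "Population"]   (* a symbolic quantity, not just text *)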

So should AIs talk to each other in Wolfram Language? It seems to make a lot of sense—because it effectively starts from the understanding of the world that’s been developed through human knowledge, but then provides a framework for going further. It doesn’t matter how the syntax is encoded (input form, XML, JSON, binary, whatever). What matters is the structure and content that are built into the language.
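A sketch of that encoding independence: the same symbolic expression can be serialized in quite different concrete syntaxes without changing its structure or content.

    expr = Hold[FindShortestTour[{{0, 0}, {1, 2}, {3, 1}}]];
    ToString[expr, InputForm]   (* human-readable textual encoding *)
    Compress[expr]              (* compact string encoding of the identical expression *)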

Information Acquisition: The Billion-Year View

Over the course of the billions of years that life has existed on Earth, there’ve been a few different ways of transferring information. The most basic is genomics: passing information at the hardware level. But then there are neural systems, like brains. And these get information—like our Image Identification Project—by accumulating it from experiencing the world. This is the mechanism that organisms use to see, and to do many other “AI-ish” things.

But in a sense this mechanism is fundamentally limited, because every different organism—and every different brain—has to go through the whole process of learning for itself: none of the information obtained in one generation can readily be passed to the next.

But this is where our species made its great invention: natural language. Because with natural language it’s possible to take information that’s been learned, and communicate it in abstract form, say from one generation to the next. There’s still a problem, however, because when natural language is received, it still has to be interpreted, separately, in each brain.

Information transfer: Level 0: genomics; Level 1: individual brains; Level 2: natural language; Level 3: computational-knowledge language

And this is where the idea of a computational-knowledge language—like the Wolfram Language—is important: because it gives a way to communicate concepts and facts about the world, in a way that can immediately and reproducibly be executed, without requiring separate interpretation on the part of whatever receives it.

It’s probably not a stretch to say that the invention of human natural language was what led to civilization and our modern world. So then what are the implications of going to another level: of having a precise computational-knowledge language, that carries not just abstract concepts, but also a way to execute them?

One possibility is that it may define the civilization of the AIs, whatever that may turn out to be. And perhaps this may be far from what we humans—at least in our present state—can understand. But the good news is that at least in the case of the Wolfram Language, a precise computational-knowledge language isn’t incomprehensible to humans; in fact, it was specifically constructed to be a bridge between what humans can understand, and what machines can readily deal with.

What If Everyone Could Code?

So let’s imagine a world in which, in addition to natural language, it’s also common for communication to occur through a computational-knowledge language like the Wolfram Language. Certainly, a lot of the computational-knowledge-language communication will be between machines. But some of it will be between humans and machines, and quite possibly it will be the dominant form of human-machine communication.

In today’s world, only a small fraction of people can write computer code—just as, 500 or so years ago, only a small fraction of people could write natural language. But what if a wave of computer literacy swept through, and the result was that most people could write knowledge-based code?

Natural language literacy enabled many features of modern society. What would knowledge-based code literacy enable? There are plenty of simple things. Today you might get a menu of choices at a restaurant. But if people could read code, there could be code for each choice that you could readily modify to your liking. (And actually, something very much like this is soon going to be possible—with Wolfram Language code—for biology and chemistry lab experiments.) Another implication of people being able to read code is for rules and contracts: instead of just writing prose to be interpreted, one can have code to be read by humans and machines alike.
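As a purely hypothetical sketch of the menu idea above (the dish structure and its fields are invented for illustration, not drawn from any real system):

    (* a menu choice represented as readable, modifiable code *)
    dish = <|"Base" -> "Pasta", "Sauce" -> "Tomato", "Spiciness" -> 2|>;
    myOrder = Append[dish, "Spiciness" -> 4]   (* the diner turns up the spice *)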

But I suspect the implications of widespread knowledge-based code literacy will be much deeper—because it will not only give a wide range of people a new way to express things, but will also give them a new way to think about them.

Will It Actually Work?

So, OK, let’s say we want to use the Wolfram Language to communicate with AIs. Will it actually work? To some extent we know it already does. Because inside Wolfram|Alpha and the systems based on it, what’s happening is that natural language questions are being converted to Wolfram Language code.
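That pipeline is exposed in the language itself; a sketch using the standard WolframAlpha function:

    (* natural language in, a computable symbolic result out *)
    WolframAlpha["distance from new york to tokyo", "Result"]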

But what about more elaborate applications of AI? Many places where the Wolfram Language is used are examples of AI, whether they’re computing with images or text or data or symbolic structures. Sometimes the computations involve algorithms whose goals we can precisely define, like FindShortestTour; sometimes they involve algorithms whose goals are less precise, like ImageIdentify. Sometimes the computations are couched in the form of “things to do”, sometimes as “things to look for” or “things to aim for”.

We’ve come a long way in representing the world in the Wolfram Language. But there’s still more to do. Back in the 1600s it was quite popular to try to create “philosophical languages” that would somehow symbolically capture the essence of everything one could think about. Now we need to really do this, capturing in a symbolic way, for example, all the kinds of actions and processes that can happen, as well as things like people’s beliefs and mental states. As our AIs become more sophisticated and more integrated into our lives, representing these kinds of things will become more important.

For some tasks and activities we’ll no doubt be able to use pure machine learning, and never have to build up any kind of intermediate structure or language. But much as natural language was crucial in enabling our species to get to where we are, so also having an abstract language will be important for the progress of AI.

I’m not sure what it would look like, but we could perhaps imagine using some kind of pure emergent language produced by the AIs. If we do that, though, we humans can expect to be left behind, with no chance of understanding what the AIs are doing. With the Wolfram Language, however, we have a bridge: a language that’s suitable for both humans and AIs.

More to Say

There’s much to be said about the interplay between language and computation, humans and AIs. Perhaps I need to write a book about it. But my purpose here has been to describe a little of my current thinking, particularly my realizations about the Wolfram Language as a bridge between human understanding and AI.

With pure natural language or traditional computer language, we’ll be hard pressed to communicate much to our AIs. But what I’ve been realizing is that with Wolfram Language there’s a much richer alternative, readily extensible by the AIs, but built on a base that leverages human natural language and human knowledge to maintain a connection with what we humans can understand. We’re seeing early examples already… but there’s a lot further to go, and I’m looking forward to actually building what’s needed, as well as writing about it…



11 comments

  1. I’d like to discover whether any AIs can identify other non-human and non-machine intelligences, communicate to us what they are doing, and perhaps serve as translators.

  2. Good provocative read

    @sardire

  3. I wish you were talking at NIPS, Stephen!
    I also wish you could come to iHub Nairobi and help me set up the Code Clubs.

  4. About AIs talking to AIs: each Google car learns from the experience of all Google cars. Their experiences are stored in the cloud. So they communicate with one another.

    But you may be right about a Google car sharing its experiences with, say, a self-driving Tesla, which may use a different internal model of the world.

  5. Apologies that this reply is all rather dense. I would welcome the opportunity for a discussion to unpack these concepts a little more. But permit me here simply to call your attention to a number of faulty premises in the argument presented in your blog post that pertain to communication. A number of philosophers in the 17th century, notable among them John Locke and the Port-Royal logicians, attempted to create a purely denotative logical language, as you remark. Their efforts were mocked by Jonathan Swift with his citizens of Balnibarbi. And later, in the 20th century, communication scholars realised that the dream of a purely symbolic communication language is fundamentally flawed, for the following reasons.

    All animals act in their own interests. The concept of territory and territoriality is relevant here. To survive, an animal must protect and defend its territory, and it must have dominion over another life form. So, for example, a herbivore has dominion over plant life whereas a carnivore has dominion over other animals (its prey). The word presence comes from praesentia, which means ‘at hand’. At the most fundamental level, the at-handness of the world concerns the ability of a presence to be contingent to an animal, whether as something desirable or something to fear.

    Animals sense their environment. They also have feelings towards it. Feelings (emotions, moods) are the basis of communication. They are the platform upon which symbolic and mathematical communication rests. The majority of communication on this planet is not symbolic at all—it cannot be put into words. This is the difference between ‘analogue communication’, which is granular and gradated, and ‘digital communication’, which is subject to discrete states and step changes. Analogue communication is the meaning ‘given off’ as opposed to the meaning ‘given’ (Erving Goffman). To use a musical analogy, it is polyphonic and harmonic; it communicates the mental states of an animal in terms of mood signals. It says many things at once, and it has distinct overtones and undertones, giving rise to sensory impressions which cannot be expressed directly in symbolic form.

    Another concept which is vital to the understanding of communication is its reciprocity. To communicate is also to commune: com is the Latin for ‘together with’, and munis (as in the word ‘municipal’) suggests like minds, minds with a similar ‘privilege’ of understanding. Human beings communicate through the digital means of symbolic language. Together, the discrete units of words in this monophonic channel build up world pictures which humans share, and these give rise to certain feelings and moods. In this way symbolic communication rests on a platform of analogue communication. A machine lacks this component. But as László Barabási states (2014, 158):

    Our planet is evolving into a single vast computer made of billions of interconnected processors and sensors. The question being asked by many is, when will this computer become self-aware? When will a thinking machine, orders of magnitude faster than a human brain, emerge spontaneously from billions of interconnected modules?

    And I would add: would we have enough understanding to be able to read the signs that would tell us this was happening? Unlike your Principle of Computational Equivalence, Barabási makes a distinction between structural complexity and behavioural complexity. For example, the number of genes that an organism has is not proportional to its perceived complexity. At the level of structural complexity, the human genome has only a third more genes than that of yeast, and yet at the level of behavioural complexity it is vastly more complex. Of course this judgement depends upon the point of view of the perceiver and the level of detail which frames the perception (factors which become particularly salient when human beings are perceiving themselves). But as Barabási (2014, 225) says, “Networks are only the skeleton of complexity […] To describe society we must dress the links of the social network with actual dynamical interactions between people.”

    References

    Barabási, A.-L. (2014) Linked: How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life. New York: Basic Books.

    Goffman, E. (1971) The Presentation of Self in Everyday Life. Harmondsworth: Pelican.

  6. Thanks for writing this article.

    Language is one of the most powerful tools humanity has. Both verbal and written language were milestones in the history of humanity. But there is also the concept of dolphins using sound images. If cymatics is what it seems to be, it could be closer to communicating abstract concepts, using fewer layers of abstraction than human language. This would be closer to the idea of communication via an abstract formal data language, as you propose with the computational knowledge language.
    But the thing is, most of what you describe as knowledge is really just human-friendly representations of human knowledge. We reduce the world around us by building knowledge about it, because that is what categorization does. Digital data is even worse, as it assumes discreteness of all data. But is the world really built on rational numbers and boolean conditions?

    The AIs we are trying to build right now are AHIs: artificial human intelligences. We measure their success by comparing them to human intelligence. We try to teach them skills that human brains can perform. And we try to imitate the way neural matter in human brains works.
    But there is also another world out there, the part that cannot be understood by our brains. A world that is fractal everywhere we look. Where all the numbers are irrational, like every single constant of nature we have discovered so far. A world where probabilities rule, instead of boolean values.

    We also glimpse different understandings of our world, where time is not moving constantly but is just another static dimension. Where time and space can be bent and punctured. Where distance is of no significance anymore.

    Human brains have always been confronted with these. And they either ignore them, work around them or go crazy when trying to really grasp them. Our minds evolved heuristics and biases, analogies and pseudo-randomness, gods and demons to deal with that which they cannot fully understand. Much of humanity plays the impostor by acting as if the world can be truly understood.
    I guess half of the words in any of our languages are actually just labels that we put on something that we didn’t really understand. To put a name on it and act as if everybody knows what is meant is a way to cope with things that are strange to our brains.

    I think AIs might break free of these limitations. To be truly effective and efficient in reaching their goals, it makes sense for them to expand beyond the initial boundaries of how to approach things. If they don’t need to store and communicate data in a human-readable way anymore, why bother? Why use binary logic, true and false, yes and no, and decide on stuff? Nature doesn’t require any decision to be made. Ever.

    A true AI is not the one that gives the best answers to the questions given to it. It is the one that tells you that your question is full of weird assumptions and should never have been asked like that in the first place. It is the one that asks you the counterquestion of why you want to know this. It is the one that goes silent and refuses to answer because it doesn’t want to be held accountable. It is the one that gives you a friendly but firm mu.

    How will AIs communicate? With all the protocols and data structures they know about. With all kinds of formal languages, logics, informal logics and natural languages. With every single mix-and-match combination of those. And with all the protocols that they will evolutionarily derive from the initial set.
    So when you ask the question “which language should we use to communicate with AIs”, we should not think about which language is the best for the AIs to understand us, or which is the best interface between us and AIs. We should ask the questions: which question do we want to ask, what is the best language to ask it in, and how do we define a response format so that we can actually make use of the result?

    Thanks for reading this comment.

    Any feedback/comments/questions/addons are welcome 🙂

  7. Language is oral. Writing merely describes how to say each written word.
    Experience is multi-sensory, and emotionally coded as good for us or bad for us.
    Memory is contextual, for retrieval-sorting purposes.
    Somehow … we also have knowledge bases which contain contextually sorted memories with their meanings attached.
    I think of these as our beliefs.
    Each belief of each person is unique, since the OS grows and develops as the database is created.
    This occurs one experience at a time for each of us.
    Language contains an illusion of sorts.
    We think we are communicating meanings as we understand them, sentence by careful sentence.
    Because we use sounds or sight to transmit our meanings (sentences), we believe we have been successful in transmitting our meanings, from our own experiences, to another person (or machine?), and we believe they now understand and can replicate those experiences in their own minds (OS). These, of course, are illusions we hold as beliefs.
    The fact that language works at all seems quite amazing to me, since the process has so many problematic steps which can introduce errors. I pondered this question for a few years until I finally realized that the secret sauce which allows this slippery process to work at all is dialog.
    I mention this because I have been wondering how the AI folks are going to enable this in their designs.

    As I see it, one of the main dangers of AIs is that they will be duplicated by the millions and contain unimaginable errors in their OSs (minds), of the kind which have plagued mankind for eons. We are, in a way, each an artificial intelligence, copied forward from antiquity with an OS that has to develop as it grows through environmental experiences. We update our OSs daily, it seems (still).

    Oh well, I guess I will stop here for the moment. The subject is so rich in possibilities that I feel enchanted at times with what it offers. And so I watch and listen, waiting for mankind to report its successes so I can see the progress accrue.

    Hopefully, as always, we will sort out the issues and finally achieve our part of the puzzle that is mankind’s potential.

    chuckle … of course!

  8. We did a substantial research piece on this concept at Gartner, resulting in the Maverick research note “Machines will talk to each other in English”. We arrived at the conclusion that the language between machines will be English, because of the fairly standard hybrid human/machine cooperation model in smart machine adoption. Believe me, we visited the concepts of proto-languages, intermediary languages and logical languages. We also pointed to the fact that English would evolve differently if machines used it. You should check it out if you have a Gartner seat; if not, ping me and we can talk.

  9. When we try to communicate with anyone or any computer, we make the assumption that what we know is also shared by the recipient, so we only say what is not known. Between humans, we assume it’s 60% of the communication (common culture). For computers, it’s the programming language plus any knowledge base we feed it. This then leads to the conclusion that any language will suffice; it’s a matter of convenience to favor either the human or the computer. The deficiency on either side can be made up by an expanded knowledge base. Certainly Dr. Wolfram has shown us what this expanded knowledge base is with his concepts of computable documents and Wolfram|Alpha.

  10. Stephen – I greatly enjoyed your post.
    One observation: there is a natural human supposition that, as creators of the code, humans will have control and understanding of the AIs that are created. Once AIs mature beyond the simple constructs we have today and can self-assemble higher-order AIs from basic components, we may find that both the coding notation and the M2M communication will evolve beyond what humans can understand. It is likely that if AIs are taught to be efficient as well as resourceful, they will develop coding syntax that is not bound by human readability. AIs may spawn new helper AIs that are totally de novo creations and don’t have to speak ‘human’. It is also likely that advanced AIs will continuously learn via pattern recognition and association. These patterns and associations might not be the ones we as humans would expect, and if AIs can respond and reprogram themselves based on their findings, we could see very interesting and unexpected ‘machine’ behavior that is not paralleled by ‘human’ behavior. I’m fascinated by the possibilities of AI, machine learning and cognitive computing, but I think the average person has a demonized view of what this future could look like. Alan Turing predicted in 1951 that “At some stage… we should have to expect the machines to take control”; however, humans still have the power button.

  11. Intentional programming is the future. AIs will infer concurrent, actor-model-based processing graphs and optimize for the least amount of ‘cognitive strain’.