When your machines talk together, what will they say about you?

As the rest of the century unfolds, you will see machines and artificial artificial intelligence units talking among themselves more and more, until that chatter becomes a persistent tapestry in the background of our lives that we only rarely hear or notice. When the machine chatter becomes nearly ubiquitous, what will our machines be saying about us?

We can speculate on some of the near-horizon machine-to-machine (M2M) apps, but what M2M chatter will be like mid-century and beyond is really impossible to predict. There's already a lot of M2M out there, but for now it usually takes a human request (the push of a button, the click of an app, or some other trigger) to initiate the conversation.

In the future it will most likely happen seamlessly and automatically. Sensors will detect our presence, temperature, vital signs, position, and everything else measurable. Timers will know when to trigger events, and meters will measure our personal context for events that unfold around us almost completely without our direction. Context will be things like: Are you standing, sitting, lying down, moving, or still? What time is it? When did you last eat? What did you last do, choose, look at, or move toward? What did you last say, and what motions did your hands make?

As making machines smart, and then smarter, becomes a matter of pennies, and as having them talk together wirelessly and automatically also becomes cheap, most companies will open up their devices and tart them up with artificial artificial intelligence (AAI*). This will escalate and accelerate for the rest of the century until most devices are talking incessantly to most other devices, sensors, and network devices like Chatty Cathy dolls on a loop. Initially companies will do this to leapfrog the competition, but later it will be absolutely necessary if they want to stay in business.

Let's start with a few simple assumptions and build upon them to get to an example that's simple, easy, and really doable tomorrow. Machines do not have artificial intelligence yet, so what starts the conversation between them? It will likely take artificial artificial intelligence (AAI*), or programs with contextual triggers, thresholds, or control zones that are also able to search and query databases both at home and online.
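To make that concrete, here is a minimal sketch, in Python, of what such an AAI-style trigger program might look like: no intelligence, just thresholds and rules checked against a small local preference database. Everything in it (the table layout, the keys, the thresholds, the rules) is hypothetical and only for illustration.

```python
import sqlite3

def load_preference(db: sqlite3.Connection, key: str) -> str | None:
    """Look up a stored preference (the 'database at home' in the text)."""
    row = db.execute(
        "SELECT value FROM preferences WHERE key = ?", (key,)
    ).fetchone()
    return row[0] if row else None

def decide(context: dict, db: sqlite3.Connection) -> str:
    """Apply simple trigger/threshold rules to the current context."""
    if context["hours_since_meal"] > 5:
        return "suggest: " + (load_preference(db, "favorite_meal") or "dinner")
    if context["location"] == "living room" and 18 <= context["hour"] <= 22:
        return "tv_on: " + (load_preference(db, "evening_channel") or "news")
    return "do nothing"

# Tiny demo with an in-memory preference store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE preferences (key TEXT PRIMARY KEY, value TEXT)")
db.execute("INSERT INTO preferences VALUES ('evening_channel', 'local news')")
context = {"hour": 19, "location": "living room", "hours_since_meal": 2}
print(decide(context, db))  # -> "tv_on: local news"
```

A query to an online service (weather, traffic, a recommendation engine) would slot into decide() the same way the local lookup does.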

What will the conversation be about? That will depend on the devices talking, the context, and the amount of artificial artificial intelligence (AAI*) built in. Context might consist of time of day, location, position, weather, and data about you: when you last ate, whether you are standing, sitting, or lying down, who else is present, what songs you like, which artists you like, who your family and friends are, what they like, and so on.
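As a sketch of how a device might actually hold that kind of context, here is one possible snapshot structure. The field names are illustrative only, not any existing standard.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ContextSnapshot:
    """One device's local picture of the user's context."""
    timestamp: datetime                  # time of day
    location: str                        # e.g. "kitchen", "garage"
    posture: str                         # "standing", "sitting", "lying down"
    weather: str                         # e.g. "raining, 12 C"
    last_meal: datetime | None = None    # when you last ate, if known
    people_present: list[str] = field(default_factory=list)
    favorite_artists: list[str] = field(default_factory=list)
    # name -> that person's known likes
    family_and_friends: dict[str, list[str]] = field(default_factory=dict)

    def hours_since_meal(self) -> float | None:
        """Derived context: how long since the last meal, if known."""
        if self.last_meal is None:
            return None
        return (self.timestamp - self.last_meal).total_seconds() / 3600
```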

So here's an easy example; all the parts exist, but none of them talk together yet. Today you drive up to your house, your garage opens because you press a button, and you switch off the radio by hand. In a fully developed M2M environment things could be different. Your car could signal ahead that you were coming once it recognized, via its GPS, that you were near your house. The garage door opener could periodically query the camera out front and the one in your car until it got short-range, confirmed recognition of both your car and your face. It could then open the door and turn on the speakers in the garage. As you pull in and step out of the car, those speakers pick up the song, station, or stream you were listening to in the car. The garage door opener signals the TV in your living room to turn on and tune to the news channel you usually watch at this time of day, and so on. Everything in this example could be kludged together today with existing tech, but since M2M is definitely our future, it really does behoove us to create some open standards around it.
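As a rough sketch of the choreography (not anyone's actual protocol), here is that scenario written as messages passed between devices. A toy in-process bus stands in for whatever real M2M transport the devices would use, and every device, topic, and payload is made up for the example.

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """Toy publish/subscribe bus standing in for an M2M network."""
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, msg: dict) -> None:
        for handler in self._subs[topic]:
            handler(msg)

bus = Bus()

def camera_recognizes(plate: str) -> bool:
    """Stand-in for real car/face recognition at short range."""
    return plate == "MY-CAR"

# Garage door opener: waits for the car to announce it is near, checks the
# camera, then opens the door and tells the house audio and the TV.
def garage_opener(msg: dict) -> None:
    if msg["distance_m"] < 200 and camera_recognizes(msg["plate"]):
        print("garage: opening door")
        bus.publish("garage/door_open", {"now_playing": msg["now_playing"]})

# Speakers: continue whatever the car was playing.
def speakers(msg: dict) -> None:
    print(f"speakers: resuming {msg['now_playing']}")

# TV: turn on the usual channel for this time of day.
def tv(msg: dict) -> None:
    print("tv: tuning to the usual news channel")

bus.subscribe("car/approaching", garage_opener)
bus.subscribe("garage/door_open", speakers)
bus.subscribe("garage/door_open", tv)

# The car, via its GPS, notices it is near home and signals ahead.
bus.publish("car/approaching",
            {"plate": "MY-CAR", "distance_m": 120, "now_playing": "jazz stream"})
```

An open standard would pin down the equivalents of those topics and payloads so that devices from different companies could join the same conversation.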

Perhaps the FCC should investigate M2M-only spectrum, and universities should look at standards for security and protection (e.g., if our machines are going to know everything about us, then they damned well better be secured with biometrics and much more).


* Artificial artificial intelligence is when masses of data about human or individual preferences, or masses of actual crowd-sourced human input, are sifted to get to a best choice or a range of best choices. It's crowd sourcing, history sourcing, or both, used to find a few best choices. Examples are Google and other search engines, which draw on the history of other people's preferences when they searched your search term; the "you might also like" recommendation engines at Netflix and iTunes, which use the history of your preferences and look for similar ones; and Mechanical Turk at Amazon.com, which puts the question to hordes of people on the net to get the best answer. This is a term I first heard from Jonathan Zittrain in a Berkman Center lecture, and I am stealing it because it's so appropriate. Bottom line: it looks like the machine is thinking and possibly prescient, but it's really neither; it's all done through brute force, or through lots of human responses sifted through context.
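A toy illustration of that sifting, assuming a hypothetical log of other people's choices keyed by context (the data and keys below are invented for the example):

```python
from collections import Counter

# (context, choice) pairs standing in for masses of crowd-sourced history.
history = [
    ("rainy evening", "soup recipe"),
    ("rainy evening", "soup recipe"),
    ("rainy evening", "grilled cheese"),
    ("sunny noon",    "salad"),
]

def best_choices(context: str, n: int = 2) -> list[str]:
    """Return the n most common choices other people made in this context."""
    counts = Counter(choice for ctx, choice in history if ctx == context)
    return [choice for choice, _ in counts.most_common(n)]

print(best_choices("rainy evening"))  # -> ['soup recipe', 'grilled cheese']
```

No thinking anywhere, just counting what other people did in a similar context, which is exactly the brute-force sifting described above.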