Ghost in the Well

Thoughts on AI

Is it time to update the Turing Test?

Instead of communicating through a computer, what if we had someone stranded in a well, so dark and so deep we couldn’t see them? So far down we could only hear their cries, but not their words.

If we could only communicate by sending the water bucket up and down, how could we know if they were human, or artificial?

What level of human-seeming would we need before sending the rescue workers down into the depths? Are they hurt? Can we tell, with such limited information? When would we call in the diggers to tear apart the well and rescue them?

And if we believed them fake? Would we leave them there? Return to discuss things with them? Strand them with only the one point of contact, staring up at the point of light that might be sky, sun, or moon?

—Stacie, May 22nd, 2017

Documentation for Mycroft Core
If you ask 100 people for the definition of "artificial intelligence," you'll get at least 100 answers, if not more. At AWS, we define it as a service or system that can perform tasks which usually require human-level intelligence, such as visual perception, speech recognition, decision making, or translation.
Like a lot of people, we've been pretty interested in TensorFlow, the Google neural network software. If you want to experiment with using it for speech recognition, you'll want to check out [Silicon Valley Data Science's] GitHub repository which promises you a fast setup for a speech recognition demo.
Four years ago, Google was faced with a conundrum: if all its users hit its voice recognition services for three minutes a day, the company would need to double the number of data centers just to handle all of the requests to the machine learning system powering those services.
Sonnet is a new open source library announced by Alphabet's DeepMind. It is built on top of their existing machine learning library, TensorFlow, and adds extra features that fit DeepMind's research requirements. Sonnet is designed to make it easier to create complex neural networks using TensorFlow.
Aditya Tiwari / Fossbytes
In the fine tradition of turning lemons into lemonade or dead horses* into dog chow, an Austin firm has found a way to grind something positive from the commander in chief's more negative words for business. When President Donald Trump blasts businesses on Twitter, millions may pay attention, but the "Trump and Dump Bot" can turn a profit in under a second.
Janet Burns / Forbes
Google Photos' intrepid Assistant just wants to help. Part of the Android-maker's photo app, it helps organize the thousands of photos stored on your phone. It can make little albums of places you went based on geolocation data, and through facial recognition can even organize albums of your friends, family, and pets.
Google has announced their soon-to-be-available Vision Kit, their next easy-to-assemble Artificial Intelligence Yourself (AIY) product. You'll have to provide your own Raspberry Pi Zero W, but that's okay, since what makes this special is Google's VisionBonnet board, which they do provide: essentially a low-power neural network accelerator board running TensorFlow.
I'll be visiting the UK next week! You can join me in London from September 30-October 2 (at New Scientist Live, The Royal Institution, and Intelligence Squared), then from October 3-6 in York, Edinburgh, Cambridge and Ely, Oxford, and Cheltenham.