Ghost in the Well

Thoughts on AI

Is it time to update the Turing Test?

Instead of communicating through a computer, what if someone were stranded in a well so dark and so deep that we couldn't see them? So far down that we could only hear their cries, but not their words?

If we could only communicate by sending the water bucket up and down, how could we know whether they were human or artificial?

What level of human-seeming would we need before sending rescue workers down into the depths? Are they hurt? Can we even tell, with such limited information? When would we call in the diggers to tear apart the well and rescue them?

And if we believed them fake? Would we leave them there? Return to discuss things with them? Strand them with only the one point of contact, staring up at a point of light that might be sky, sun, or moon?

—Stacie, May 22nd, 2017

Documentation for Mycroft Core
If you ask 100 people for the definition of "artificial intelligence," you'll get at least 100 answers, if not more. At AWS, we define it as a service or system that can perform tasks usually requiring human-level intelligence, such as visual perception, speech recognition, decision making, or translation.
Like a lot of people, we've been pretty interested in TensorFlow, the Google neural network software. If you want to experiment with using it for speech recognition, you'll want to check out [Silicon Valley Data Science's] GitHub repository which promises you a fast setup for a speech recognition demo.
Four years ago, Google was faced with a conundrum: if all its users hit its voice recognition services for three minutes a day, the company would need to double the number of data centers just to handle all of the requests to the machine learning system powering those services.
Sonnet is a new open source library announced by Alphabet's DeepMind. It is built on top of their existing machine learning library, TensorFlow, and adds extra features that fit DeepMind's research requirements. Sonnet is designed to make it easier to create complex neural networks using TensorFlow.
Aditya Tiwari / Fossbytes
In the fine tradition of turning lemons into lemonade or dead horses* into dog chow, an Austin firm has found a way to grind something positive from the commander in chief's more negative words for business. When President Donald Trump blasts businesses on Twitter, millions may pay attention, but the "Trump and Dump Bot" can turn a profit in under a second.
Janet Burns / Forbes
"I can't go on," his brother said. Tsuyoshi Shimizu looked thoughtfully into the screen of his pasokon. His older brother's face was shiny with sweat from a late-night drinking bout. "It's only a career," said Tsuyoshi, sitting up on his futon and adjusting his pajamas. "You worry too much."
Within three years deep learning will change front-end development. It will increase prototyping speed and lower the barrier for building software. The field took off last year when Tony Beltramelli introduced the pix2code paper and Airbnb launched sketch2code. Currently, the largest barrier to automating front-end development is computing power.
If you're like me, then you'd do pretty much anything to have your own R2-D2 or BB-8 robotic buddy. Just imagine the adorable adventures you'd have together! I'm delighted to report that the Anki Cozmo is the droid you've been looking for. Cozmo is big personality packed into an itty-bitty