
Today’s is a bit of a different read: after listening to This Week in Startups and a podcast on Artificial Intelligence, I got curious about what has developed in the past couple of years and what the future looks like. More on that later, though; first, your weekly digest of what’s interesting (at least to me):
First, if you’ve not heard, the President of Brazil was impeached last week by her congress. While it may seem trivial or inconsequential to some (think back to Bill Clinton’s impeachment in the ’90s), it has serious implications for the world, some of which are explored by MarketWatch.
Speaking of leadership and the issues around it, HBR has a good article this week asking “why do so many incompetent men become leaders?” It’s interesting to note how the mythical image of a leader aligns with narcissism, psychopathy, histrionics, and Machiavellianism, and how many male leaders have embodied those traits. Strategy+business also had two interesting articles this week: one on how to design a team to deliver powerful capabilities and another on the obsolescence of traditional organizational structures.
Last week I had a few articles about how Facebook has supposedly been squelching conservative news from people’s feeds. This week, the New York Times has an interesting take on the topic, arguing that it was a whole lot of hubbub over nothing: even if the editors at Facebook were biased against conservative news sources, the “Trending Topics” section is so small that it’s inconsequential (it’s practically invisible in the mobile UI). The real issue lies in the algorithms that determine what users see in their feeds.
Also on Facebook, here’s a look at the Book’s attempt to bring millions of Indians onto the internet and how it failed. Oh, and I’d be remiss if I didn’t include a good summary of everything Google discussed at its I/O conference.
Since last year’s market peak, Apple has lost roughly a quarter of a trillion dollars of market capitalization, and even with the recent influx from Warren Buffett’s Berkshire Hathaway, the stock is still hovering at $94 per share, versus the market high of $132. The Conversation this week posits that Apple has gone from being the disruptor to being the disrupted, losing momentum and direction on what the next “one more thing” may be.
If there’s one trend we’ve been seeing, it’s that personalized medicine is the next big thing in mobile health. There’s an estimated $42 billion market waiting to be opened up and a mad dash to do so. TechCrunch this week has a good read on how that blue ocean is going to be regulated and monitored by a partnership between the FDA and the FTC. There is, and has been, a flurry of activity around personal health apps, and friends of mine in the startup space say that government regulations get in the way; the FDA-FTC partnership appears intended to make that ocean more accessible. Speaking of startups, here’s an interesting article from someone I know from my days in Boulder that poses the question “do startups have a drinking problem?”
In a bit of a turn of events, it appears that emerging economies are turning out to be early tech adopters. The World Economic Forum terms this the Fourth Industrial Revolution, and MIT Technology Review has a good overview of the how and the why. The technology most driving the change? The mobile phone, which shouldn’t be a shock to anyone. Also from MIT is an article on how wireless, super-fast internet access is coming to our homes.
For those of you who have paid attention to Zappos and its organizational structure, you’ll know a little something about Holacracy. For those that don’t, Holacracy is a system of organizational governance in which authority and decision-making are distributed throughout a holarchy of self-organizing teams rather than being vested in a management hierarchy. Well, it seems that system may not be living up to its hype. In fact, it may be time to put a nail in that coffin.
As a bit of a segue into the world of Artificial Intelligence, there’s an interesting article this week about how companies are trying to be more human. In a similar vein to our introduction to Viv last week, Google, Amazon, Microsoft, and even Facebook are augmenting their personal-assistant efforts. While there may be some questions around privacy, AI, and personal assistants, bots are pushing ahead in this area as well. Brands are attempting to act like people too, engaging with and talking to consumers on social media; we saw that taken to its extreme in the TV show Community, where, in order to open a franchise on campus, Subway enrolls as a student at the school. The next step, according to the author, is for Google to install an always-on device that listens to and analyzes everything you say, allowing Google to become even more attached to your life. And if that turns your stomach, you’ve not seen anything yet.
Artificial Intelligence. I’ve shared a few articles about this space over the months, but after listening to a podcast earlier this week about Vicarious, I thought it would be good to explore the different types of AI together and end with a question: what happens when we finally achieve artificial general intelligence and put massive computational power behind it?
So what is AI? Many of us think of the Hollywood-ized version of AI: everything from Terminator to Star Wars and a whole lot in between. The reality is that AI is already everywhere, although not in those fanciful ways. It ranges from your phone’s calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing. We use it all the time in our everyday lives, but we likely don’t even realize it. John McCarthy coined the term in 1956 and later complained that as soon as something works, it’s just how things have always worked and we don’t acknowledge it as AI anymore. Because of this, AI often sounds more like a mythical future prediction than a reality, and at the same time like a pop concept from the past that never came to fruition.
To clear it up a bit, I’m going to talk about Artificial Narrow Intelligence (what we have today), Artificial General Intelligence (what companies like Vicarious are trying to achieve), and Artificial Superintelligence. Along the lines of clearing things up, it is good to note that AI doesn’t mean robots. Robots are the shell that holds the AI, the container; think of the latest Avengers movie, with Ultron occupying Iron Man armor. AI is the brain, and the robot is its body, if it even has a body. For example, the software and data behind Siri is AI, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.
Second, you’ve probably heard the term “singularity” or “technological singularity.” The term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It’s been used in physics to describe phenomena like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang, again situations where the usual rules don’t apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own, a moment for him when life as we know it will be forever changed and normal rules will no longer apply.
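As a toy illustration of that mathematical usage (my example, not one from Vinge’s essay): a quantity whose growth rate scales with its own square blows up in finite time, and the model simply stops applying beyond that point:

```latex
\[
  \frac{dx}{dt} = k\,x^{2}
  \quad\Longrightarrow\quad
  x(t) = \frac{x_{0}}{1 - k\,x_{0}\,t},
  \qquad
  \lim_{t \to t_{s}^{-}} x(t) = \infty
  \quad\text{where}\quad
  t_{s} = \frac{1}{k\,x_{0}}.
\]
```

Everything before t_s is ordinary, well-behaved growth; at t_s the equation has nothing more to say. That is the sense in which Vinge borrowed the word.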
So back to those categories of AI: Narrow, General, and Super:
Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.
Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.
Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words “immortality” and “extinction” will both appear in these posts multiple times. It may also be able to provide the answer to the ultimate question of life, the universe, and everything.
Currently, we live in a world of Artificial Narrow Intelligence: machine intelligence that equals or exceeds human intelligence or efficiency at one specific thing. A few examples:
- Cars are full of ANI systems, from the computer that figures out when the anti-lock brakes should kick in to the computer that tunes the parameters of the fuel injection systems. Google’s self-driving car, which is being tested now, will contain robust ANI systems that allow it to perceive and react to the world around it.
- Your phone is a little ANI factory. When you navigate using your map app, receive tailored music recommendations from Pandora, check tomorrow’s weather, talk to Siri, or do dozens of other everyday activities, you’re using ANI.
- Your email spam filter is a classic type of ANI—it starts off loaded with intelligence about how to figure out what’s spam and what’s not, and then it learns and tailors its intelligence to you as it gets experience with your particular preferences. The Nest Thermostat does the same thing as it starts to figure out your typical routine and act accordingly. (There’s a minimal spam-filter sketch just after this list.)
- You know the whole creepy thing that goes on when you search for a product on Amazon and then see it as a “recommended for you” product on a different site, or when Facebook somehow knows who it makes sense for you to add as a friend? That’s a network of ANI systems working together, informing each other about who you are and what you like and then using that information to decide what to show you. Same goes for Amazon’s “people who bought this also bought…” feature—that’s an ANI system whose job is to gather info from the behavior of millions of customers and synthesize it to cleverly upsell you so you’ll buy more things. (A sketch of that co-occurrence trick also follows this list.)
- Google Translate is another classic ANI system—impressively good at one narrow task. Voice recognition is another, and there are a bunch of apps that use those two ANIs as a tag team, allowing you to speak a sentence in one language and have the phone spit out the same sentence in another.
- When your plane lands, it’s not a human that decides which gate it should go to. Just like it’s not a human that determined the price of your ticket.
- The world’s best Checkers, Chess, Scrabble, Backgammon, and Othello players are now all ANI systems.
- Google search is one large ANI brain with incredibly sophisticated methods for ranking pages and figuring out what to show you in particular. Same goes for Facebook’s News Feed.
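To make the spam-filter bullet concrete, here’s a minimal sketch of the classic learning approach, a naive Bayes classifier over word counts. This is my illustrative example, not how Gmail or any specific filter is actually built, and the training messages are made up:

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Minimal naive Bayes text classifier: learns word frequencies
    from labeled examples, then scores new messages."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, message, label):
        # "Gets experience with your particular preferences."
        self.message_counts[label] += 1
        self.word_counts[label].update(message.lower().split())

    def is_spam(self, message):
        total = sum(self.message_counts.values())
        vocab = len(self.word_counts["spam"] | self.word_counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            # Start from the prior probability of the class...
            score = math.log(self.message_counts[label] / total)
            n_words = sum(self.word_counts[label].values())
            for word in message.lower().split():
                # ...and add the Laplace-smoothed likelihood of each word.
                count = self.word_counts[label][word] + 1
                score += math.log(count / (n_words + vocab))
            scores[label] = score
        return scores["spam"] > scores["ham"]

# Toy usage: the filter tailors itself as it sees more mail.
f = NaiveBayesSpamFilter()
f.train("win free money now", "spam")
f.train("meeting notes attached", "ham")
print(f.is_spam("free money"))  # True
```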
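And for the “people who bought this also bought…” bullet, the simplest version of that idea is plain item-to-item co-occurrence counting. Again a sketch with made-up purchase data, not Amazon’s actual system:

```python
from collections import defaultdict
from itertools import combinations

# Each order is the set of items one customer bought together (toy data).
orders = [
    {"tent", "sleeping bag", "headlamp"},
    {"tent", "sleeping bag"},
    {"headlamp", "batteries"},
    {"sleeping bag", "headlamp"},
]

# Count how often each pair of items shows up in the same order.
co_counts = defaultdict(lambda: defaultdict(int))
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def also_bought(item, k=3):
    """Items most frequently purchased alongside `item`."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: -kv[1])
    return [other for other, _ in ranked[:k]]

print(also_bought("tent"))  # ['sleeping bag', 'headlamp']
```

Scale the same counting up to millions of customers (and blend in many other ANI signals) and you get the upsell machine described above.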
ANI systems as they are now aren’t especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).
So what will it take to get us from ANI to AGI? Well, that’s a tough one. Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down—all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.
What’s interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you’d think they are. Build a computer that can multiply two ten-digit numbers in a split second—incredibly easy. Build one that can look at a dog and answer whether it’s a dog or a cat—spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old’s picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things—like calculus, financial market strategy, and language translation—are mind-numbingly easy for a computer, while easy things—like vision, motion, movement, and perception—are insanely hard for it.
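To put that asymmetry in code terms (a toy illustration, not a serious benchmark): the “hard” task is a one-liner, while even a naive attempt at the “easy” task needs labeled examples and still falls apart on anything like a real photo:

```python
# The "hard" thing: exact arithmetic on ten-digit numbers is built in.
print(1234567890 * 9876543210)  # instant, exact

# The "easy" thing: there is no built-in for "dog or cat?"
# A naive attempt, comparing raw pixels to labeled examples, only
# works on toy data; real systems learn millions of parameters
# from millions of labeled images.
def nearest_neighbor_label(image, labeled_examples):
    """Guess a label by raw pixel distance to known examples."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled_examples, key=lambda ex: distance(image, ex[0]))[1]

examples = [([0, 0, 1, 1], "cat"), ([1, 1, 0, 0], "dog")]  # 4-"pixel" toys
print(nearest_neighbor_label([1, 1, 0, 1], examples))  # 'dog'
```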
What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless because you have perfected software in your brain for doing it. The same idea explains why malware isn’t dumb for failing the slanted-word recognition test (the CAPTCHA) when you sign up for a new account on a site—your brain is super impressive for being able to pass it.
One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware: if an AI system is going to be as intelligent as the brain, it’ll need to equal the brain’s raw computing capacity. The second key to creating AGI is to make it smart, and there are pretty much three ways to do that: plagiarize the human brain, leverage evolution through simulation, or make the whole thing the computer’s problem, not ours. For the middle one, we’d use a method called “genetic algorithms,” which would work something like this: there would be a performance-and-evaluation process that happens again and again (the same way biological creatures “perform” by living life and are “evaluated” by whether they manage to reproduce). A group of computers would try to do tasks, and the most successful ones would be bred with each other by merging half of each of their programming into a new computer; the less successful ones would be eliminated. Over many, many iterations, this natural-selection process would produce better and better computers. The challenge is creating an automated evaluation and breeding cycle so this evolution process can run on its own (a minimal sketch follows).
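Here’s a minimal sketch of that perform-evaluate-breed loop. Since we can’t yet write an evaluation function for “intelligence,” I’ve substituted a toy goal (evolving a bit string toward a target); the task, population size, and mutation rate are all mine, purely for illustration:

```python
import random

TARGET = [1] * 32          # stand-in goal: an all-ones bit string
POP_SIZE, GENERATIONS = 50, 200

def fitness(genome):
    """Automated evaluation step: how close is this 'computer' to the goal?"""
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def breed(a, b):
    """Merge half of each parent's 'programming', with rare mutations."""
    cut = len(a) // 2
    child = a[:cut] + b[cut:]
    return [g ^ 1 if random.random() < 0.01 else g for g in child]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]   # the less successful are eliminated
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children         # next generation

print(fitness(max(population, key=fitness)), "of", len(TARGET))
```

Notice that the fitness function is doing all the work. Evolution got its evaluation step for free (survive and reproduce, or don’t); for AGI we’d have to write one that can recognize “smarter,” which is exactly the challenge mentioned above.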
The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.
But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly—it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Second, evolution doesn’t aim for anything, including intelligence—sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence—like revamping the ways cells produce energy—while we can remove those extra burdens and use things like electricity. There’s no doubt we’d be much, much faster than evolution—but it’s still not clear whether we’ll be able to improve upon evolution enough to make this a viable strategy.
The thing is, all of this could happen now. Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly.
I’ll leave the question of ASI out there simply because we don’t know enough yet to even really conceptualize it – we need to get to AGI before we can truly understand what is possible with ASI. Perhaps Roddenberry will have been right all along with his vision from the ’60s. If you want to get up to speed on how to converse in AI, check out this resource from, of all places, the BBC, or check out this article from The Verge for more. Also, check out that podcast I mentioned to get some answers on what might be in store for us with AGI.
To end this week, there are two TED talks on AI: one from Nick Bostrom on what happens when our computers get smarter than we are and another from Jeremy Howard on the wonderful and terrifying implications of computers that can learn.