Artificial General Intelligence, Conversational AI, and Blockchain

How do AI, blockchain, trust, and "smart machines" intersect? For this exciting episode of CXOTalk, industry analyst, Michael Krigsman, speaks with serial entrepreneur and inventor Peter Voss.

Peter started out in electronics engineering, but quickly moved into software. After developing a comprehensive ERP software package, Peter took his first software company from zero to a 400-person IPO in seven years.

Fueled by the fragile nature of software, Peter embarked on a 20-year journey to study intelligence (how it develops in humans, how to measure it, and current AI efforts), and to replicate it in software. His research culminated in the creation of a natural language intelligence engine that can think, learn, and reason -- and adapt to, and grow with, the user. He even coined the term ‘AGI’ (Artificial General Intelligence) with fellow luminaries in the space.

Peter founded SmartAction.ai in 2009, which developed the first AGI-based call center automation technology. Now, in his latest venture, Aigo.ai, he is taking that technology a step further with the commercialization of the second generation of his ‘Conversational AI’ technology with a bold mission of providing hyper-intelligent, hyper-personal assistants for everyone.

Transcript

Michael Krigsman: With all of this talk about artificial intelligence, what gets lost is that AI is not something monolithic. It's not a monolithic thing. Today, on Episode #295 of CxOTalk, we are going to explore some different types of artificial intelligence. In particular, something called artificial general intelligence. I'm Michael Krigsman. I'm an industry analyst and the host of CxOTalk.

Now, before I introduce our guest for today, I want you to tell your friends, and I want you to tell them to watch right now and also please subscribe on YouTube. Hit the subscribe button.

Today, I'm joined by Peter Voss, who is the CEO and the chief scientist at Aigo.ai. It's a very interesting artificial intelligence company. Peter is a serial entrepreneur, and he is a pioneer in this field of artificial general intelligence.

Peter Voss, welcome. It's so great for you to be here on CxOTalk.

Peter Voss: Well, thanks for having me, Michael.

Michael Krigsman: Peter, very briefly tell us about your background.

Peter Voss: Yes. I'll elaborate a bit more to give some context. I actually started in electronics engineering. I had my own electronics company doing microprocessor control stuff. But then, I fell in love with software, and my company turned into a software company.

I developed an ERP package and built a company around that. We went from zero to 400 people. We did an IPO, and that was fantastic. I'd love to do that again.

The thing is, when I sold my interest in the company and was trying to decide what I really wanted to do, what struck me is how dumb software is. I was very proud of my own software but, still, whatever the program did, it didn't anticipate, didn't think. It really doesn't have any common sense.

I spent five years just studying intelligence, different aspects of intelligence: How do we measure it? Even going back to philosophy: What is reality and how do we know anything? How are we certain of things? How do children learn? How is intelligence different? Is human intelligence different from animal intelligence? And, of course, everything that had been done in AI.

Then, as a culmination of those studies, I started my first AI company in 2001. Since then, I've been working in that field.

Michael Krigsman: Okay. You have a very broad background. Tell us about Aigo.ai. What are the kinds of problems that this company is looking at?

Peter Voss: At Aigo.ai, our product Aigo is an intelligence engine, a conversational intelligence engine that can be deployed in many different modalities. It could be the brain of a robot, if you want to have an intelligent, ongoing conversation. Or, it could be a personal assistant. We could talk more about the commercialization aspects later but, essentially, think of it as a brain, an artificial brain that you can have a real conversation with, that remembers what you said, that makes sense of what you're saying, and so on.

Michael Krigsman: Now, where is the intersection between this company and artificial general intelligence, which you're one of the pioneers of? How do they intersect?

Peter Voss: Yes. In 2001, when I started the company, actually, three of us--Ben Goertzel, Shane Legg, and myself--coined the term "artificial general intelligence," AGI, which has now actually become quite widely used. What AGI really stands for is the original dream of the field of AI.

The field of AI has been around for more than 60 years, and the pioneers, the initial pioneers in AI, really were interested in building a thinking, learning machine, a machine that can think and learn the way humans do. But, this proved to be really, really difficult. So, over the decades, people have really changed their ambition and said, "We can't solve that problem yet, so let's solve little problems or narrow problems," like playing chess is obviously one of the famous ones. But, it could be optimizing container placement or traffic, or it could be medical diagnosis. It could be any of those.

What really happened is that, instead of people building machines that can think and learn by themselves, it's the programmer's intelligence being turned into code to solve a particular problem. Over the decades, AI has really lost its way. It just turned out to be too hard.

In 2001, a group of us got together to write the book to recapture the original dream and ideal of AI, of a general intelligence, and that's what this is all about. My effort for the last 20-plus years has really been to make that happen, to build that. The products we've built and developed, initially in an R&D phase and then in commercialization, are inherently based on artificial general intelligence. Now, we're still a long way from human-level intelligence, but the ability of a system to actually learn and reason is key to all of our developments and products.

Michael Krigsman: Now, for those of us who are not experts in the various types of AI and machine learning, can you draw the distinction between AI, machine learning, and artificial general intelligence as you're developing it?

Peter Voss: In fact, DARPA gave a presentation last year where they introduced, or used, the term "The Third Wave of AI." What they mean by that is the first wave of AI was sort of the traditional flowchart type of programming. That's really all the programming in AI that has happened over the decades with various logic approaches, some statistical approaches, and so on but, essentially, programming.

The second wave hit us like a tsunami with machine learning, about five, six years ago. Eventually, there was enough data and processing power that you could just throw a lot of data at a program that could then build a model and do very impressive categorization, recognition, and so on. We've seen that especially in speech recognition, in translation, in image recognition used for cars, and in medical diagnosis. If you have a lot of data and you can use the data to build the model, you can then do categorization and prediction. That's the second wave.

Right now, almost everybody talking about AI is actually referring to this big data, machine learning, deep learning, whatever terms you want to use. But, people are also recognizing, and what DARPA said, we need to move beyond that. We need systems that can actually learn immediately, interactively in the field, that can reason, that can explain themselves, and that's what they call the third wave. That's really what our approach has been since 2001.

Michael Krigsman: From a technology standpoint, from a development standpoint, you mentioned that you're trying to--I'm paraphrasing. Correct me if I'm wrong--essentially, encode human thought, human logic. My question then is, can you give us a kind of layman's explanation as to how you do that?

Peter Voss: Yes. The fundamental distinction is that with narrow AI, the programmer or the data scientist really figures out what problem we're trying to solve and how to solve it, and then they write code, or they collect data and tweak the network, the model they're trying to build. It's the engineer's intelligence that is being turned into code directly to solve a problem.

Whereas, the approach with artificial general intelligence is to take our intelligence and build the system that can learn and reason by itself. You're trying to have the intelligence in the system itself rather than taking human intelligence and turning it into code - if that makes sense.

Michael Krigsman: Can you give a concrete example when you say, "Have the intelligence built into the system"? I think many of us are familiar now with machine learning, and so how is this different? How is the approach different?

Peter Voss: Right. Fundamentally, what you need for a truly intelligent system, if you just think of when you would consider an animal or human to be intelligent, is that there's somebody at home, basically. [Laughter] There are certain basic requirements. You would expect the system to be able to remember what you said before, so memory is important. To truly understand what you're saying and to be able to integrate whatever it hears and use that information to learn interactively, that's really a key to it. To use context, so that the same input you get, whether it's text, vision, or whatever, is handled differently depending on what the context is, what you're trying to achieve, who you're talking to, what your goals are. Interactive learning, context, memory, remembering things, reasoning, all of those things are key to real intelligence.

Michael Krigsman: Now, does this type of intelligence rely upon very, very large datasets? As you explained to me earlier when we were talking, AI has its roots going back way before "big data." Again, where does data fit in, and how is this different from machine learning, as we know it?

Peter Voss: Right. Machine learning, or the second wave in DARPA's parlance, really focuses entirely on big data and big computation. That's one approach.

Clearly, as humans, when we learn, we are also exposed to a lot of data as we interact with the world. That is important as baseline background information that you need for an intelligent system. But, I think it's important that not all of your focus goes there.

The real focus of intelligence is that interactive learning. For example, a child. They can just see a giraffe, a picture of a giraffe, for the first time. They'll immediately be able to recognize giraffes. It's distinctive enough.

Now, with machine learning, you need to give it thousands of examples and counterexamples and so on. There isn't that immediate recognition and learning, what's called one-shot learning, where you can get just one instance. Or, somebody gives you a piece of information, a single sentence, and immediately that information becomes available.

Big data has a role to play in terms of, especially, things like vision. You need to be exposed to a lot of background information and reference points that you need to learn. But, once you have that, you need to be able to learn instantaneously.
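
To make the one-shot idea concrete, here is a minimal Python sketch, assuming a hypothetical feature extractor and similarity threshold (this is illustrative, not Aigo's actual method): a single stored example is enough to recognize later instances by nearest-neighbor similarity.

```python
import numpy as np

class OneShotRecognizer:
    """Toy one-shot learner: remember a single example per concept,
    then recognize new inputs by nearest-neighbor (cosine) similarity."""

    def __init__(self, threshold=0.8):
        self.prototypes = {}        # concept name -> normalized feature vector
        self.threshold = threshold  # minimum similarity to count as a match

    def learn(self, concept, features):
        # One example is enough: just store its normalized feature vector.
        self.prototypes[concept] = features / np.linalg.norm(features)

    def recognize(self, features):
        # Compare the new input against every stored prototype.
        features = features / np.linalg.norm(features)
        best, best_sim = None, self.threshold
        for concept, proto in self.prototypes.items():
            sim = float(np.dot(features, proto))
            if sim > best_sim:
                best, best_sim = concept, sim
        return best  # None means "I haven't learned anything similar yet"

# One "giraffe" example suffices for later recognition (features are made up).
recognizer = OneShotRecognizer()
recognizer.learn("giraffe", np.array([0.9, 0.1, 0.8]))
print(recognizer.recognize(np.array([0.85, 0.15, 0.75])))  # -> giraffe
```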

Michael Krigsman: What about the technology for developing this? It sounds like it's almost like thinking machines. I'm sure you remember; I remember there was a company called Thinking Machines here in Cambridge many, many years ago. But, it kind of has the ring of science fiction. So, in terms of the realism scale and delivering results, where are we today on that continuum?

Peter Voss: Right. What we've been doing and what we believe the third wave of AI really is about is to build a cognitive architecture. There are actually quite a few people who share that sentiment. In fact, the head of Google's DeepMind said, "We need to look more at the way the brain works to achieve real intelligence," and I think I agree with that. Maybe not the brain, but the way our mental processes work.

We need a cognitive architecture. We need a system that inherently has short-term memory, long-term memory, reasoning, context, metacognition, all of these things, capabilities for all of these things in an integrated way, in an integrated, functional way. I think that's the way to go about it.
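
As an illustration only, here is a toy Python sketch of how the components named above (short-term memory, long-term memory, context, interactive learning, and a reasoning step) might be wired into one loop; all names and structures are assumptions for the sake of the example, not Aigo's internals.

```python
from dataclasses import dataclass, field

# A toy skeleton of the kind of cognitive architecture described above:
# short-term memory, long-term memory, context, and a reasoning step wired
# into one interaction loop. All names and structures are illustrative only.

@dataclass
class CognitiveAgent:
    short_term: list = field(default_factory=list)  # recent utterances (working memory)
    long_term: dict = field(default_factory=dict)   # facts learned so far
    context: dict = field(default_factory=dict)     # current speaker, goal, etc.

    def perceive(self, speaker: str, utterance: str) -> None:
        self.context["speaker"] = speaker            # track who we're talking to
        self.short_term.append(utterance)
        self.short_term = self.short_term[-10:]      # keep working memory bounded

    def learn(self, key: str, value: str) -> None:
        self.long_term[key] = value                  # interactive, one-shot update

    def answer(self, question: str) -> str:
        # Minimal "reasoning": look the answer up in long-term memory.
        # A real architecture would combine context, memory, and inference here.
        if question in self.long_term:
            return f"{self.context.get('speaker', 'You')} told me: {self.long_term[question]}"
        return "I don't know yet - tell me and I'll remember."

agent = CognitiveAgent()
agent.perceive("Michael", "My sister's name is Anna.")
agent.learn("sister's name", "Anna")
print(agent.answer("sister's name"))  # -> "Michael told me: Anna"
```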

Geoff Hinton, the sort of godfather of deep learning, recently said we should throw it all out and start again, because with this big data backpropagation approach you're basically building a model in the factory and then the model is read-only. It can't adapt in the real world. We need to get away from that. A cognitive architecture that inherently has the ability to learn interactively is, I think, absolutely crucial.

Where are we? Well, the thing is, big data machine learning has been so successful in so many areas. For example, for Google to improve their click rate by 0.1% is probably worth billions. That's what they have. They have a lot of data. They have a lot of compute power, and all the big companies do. That's the hammer they have, so everything looks like a nail. For them, yes, if they can incrementally improve those kinds of services and their advertising revenue--or, for Amazon, their targeted sales--then it's really valuable to them.

The problem is, the success of deep learning and machine learning has sucked all of the oxygen out of the air as far as AI development is concerned. If you want to do a Ph.D., it has to be in machine learning or deep learning. If you want to get paid the big bucks, that's the field you want to be in.

It's really hard to get traction in the field of cognitive architectures. We've just been persisting on improving that technology over the years. We've made some really good progress with that. It just needs more effort.

Michael Krigsman: It sounds like cognitive architectures, as you're describing, ultimately solve a much broader class of problems, but machine learning and similar technologies have an immediate and direct payoff in the present. Therefore, it's natural; not to mention the hype. Everybody jumps on the bandwagon. And so, these forces propel folks into machine learning as opposed to the broader set of architectures that you were describing.

Peter Voss: Yes, that's exactly right. There is a good reason for machine learning and deep learning; solving narrow, particular problems that you want to solve. If you can build the model that can solve that problem, then that's great. You can actually get something that works, within that domain, quite effectively.

Yes, there's a lot of hype, and a lot of companies feel they need to jump on the bandwagon of machine learning and deep learning without really knowing why they need it. It's just, "We've got to be there. Everybody is doing it, so we've got to do it," when more conventional categorization or recognition techniques that have been around for a long time may actually be much more effective than deep learning.

Yes, there is the hype angle, and then there is the fact that it really does work very well for narrow applications. It's much harder building a system that can potentially learn a much broader range of things. It does kind of make sense that people would focus on that but, unfortunately, it's sort of at the expense of almost exclusively focusing on machine learning.

Michael Krigsman: Now, let's shift gears slightly because I think you've given us an excellent background. Thank you for that because I've been trying to understand the distinctions between these different types of AI. It's really hard to get simple, clear information. Now, tell us about the types, classes, or categories of problems that you're trying to solve and how you're applying these techniques, as you've been describing, to your own company and your own commercial results, products.

Peter Voss: Right. Yes. Before I go into that, I just want to say that artificial general intelligence, of course, is about building machines that can think, learn, and reason but, specifically, can learn interactively. Potentially, they could also be animal-level intelligence. It could be a robot that can learn to interact with the world.

Our focus, when we did our initial research--we actually did quite a bit of work in that area. We had a virtual mouse in a virtual environment that learned to navigate. It had virtual whiskers, ears, eyes, and so on. But, we came to the conclusion, after quite a bit of development work, that focusing on natural language, on human language, was a more sensible approach for us to work towards AGI.

Our focus has really been on natural language. In 2009, our technology had been developed sufficiently to commercialize it in the call center space. I launched a company called Smart Action that's now called SmartAction.ai - it has to be a .ai company.

We have the intelligence engine running conversations for automating call center calls very effectively. Mostly, when I tell people about Smart Action and that we're automating calls, they say, "Well, I hate these things. They're horrible. I always hang up or press zero to get to an operator." I say, "Yes, that's because they were using the wrong technology. They're using wave one technology to try and solve the problem."

With our cognitive engine, we can offer a much, much better experience there, one that can really understand you. It has memory. It remembers what you said previously. It can disambiguate things and so on.

Smart Action has been quite successful in commercializing our technology. But, it's still a relatively narrow application because, in the call center, of course, you're just trying to solve a particular problem. AAA is using it for roadside assistance; MGM Hotels is using it for "How can I help you?" to route the calls. Terminix was using it for scheduling appointments so that you could say, "Do you have an appointment? I want to make an appointment next week. What do you have?" Then our agent might say, "I have something on Thursday at 3 o'clock." You might come back and say, "Do you have something a little later?" You could have that kind of conversation. They're still relatively narrow, and our ambition really is to have a broader intelligence and, importantly, an ongoing conversation in which the system learns about you, builds up your history, and becomes more and more useful.

Five years ago, I handed over management of Smart Action and started Aigo.ai to focus on cranking up the capabilities, the intelligence of our system, so that it can have more complex conversations and a broader range of applications. Now, we are looking at applications ranging from putting our Aigo brain into a robot, for example in a hospital or hotel, where it could remember previous interactions and be useful in that way. It could go into a games engine. It could be used for helping people manage diabetes, or it could be used as a natural language interface to complex software.

There's a big demand because software is becoming so complex, especially enterprise software, that people ideally want to just be able to talk to their computers and say, "Show me the sales results for the last three months in Europe," and then it pops up. Then you want to be able to say, "Exclude Britain," because of Brexit, and it should just do that. It should also have a memory so that you could say, "Run that cash flow report you did for me last week," and then it should say, "Well, do you mean the one for Argentina or the one for Brazil?" so that you have this personal assistant to help you with complex software.

Then, for us, the ultimate app is a personal assistant that stays with you permanently, that learns about you, and that can really be much more useful than the current chatbots that are out there.

Michael Krigsman: That personal assistant, that's a very hard problem. Even a calendar: there are a number of companies that are using AI, cognitive bots of various kinds or, in some cases, just simple chatbots, to try to schedule. I looked at this. Once you dig into the complexities of something as simple as scheduling a meeting, there are so many variants. I'm not sure if anybody is doing that extremely, extremely well today. These are very hard problems - deceptively hard.

Peter Voss: Yes, I think you put your finger on something very important. What we as humans consider simple, trivial, scheduling an appointment, just relies on a lot of common sense or even beyond common sense - specific knowledge.

Let's take the example of scheduling an appointment. Let's say you want to schedule an appointment with an investor and there are three or four other people in the appointment. The subtlety of how you word it, what flexibility you allow for the various players in that, who is more important than somebody else, and who is optional - you couldn't just hire somebody off the street to do that for you. That's why a CEO's secretary is a highly skilled person who understands these sorts of human dynamics. It's going to take quite a while for AI to get to the point where it really has that sort of common sense knowledge, but we can get there. Scheduling is actually a deceptively hard problem to solve.

We have to pick our fights in terms of a personal assistant. You have to learn what it is good for. Scheduling appointments within your group in the company, yes, fine, I think you can do well. But, people also think that planning a trip or a vacation is trivial. Well, actually, it isn't.

On a recent trip, just a simple trip to New York, I spent 45 minutes just looking at the options. Do I want to get there an hour earlier or later? How reliable is the airline? Then what hotel? How far away do I want to be, and all the different things? If it's very routine and standard, if you always stay at the same kind of hotel and take the same trip, sure, you can do that. But, there are many things that we do that actually are quite complex.

The beauty of our approach, with a cognitive engine, is that the system remembers what it did for you before and it can reason about it. It can ask you to clarify, and it'll then remember that. We inherently have the technology to solve these problems robustly. But, yes, there's a long way to go to human-level performance and common sense.

Michael Krigsman: You raised an interesting point that, even in something as seemingly simple as scheduling a meeting, it brings in issues of emotions, perception, and perceptions of social status and social hierarchy, things like that. And so, as you're building the system, your system, how do you introduce and how do you think about the architecture of the emotional and this perception aspect of the AI as it interacts with people so that it looks and feels right?

Peter Voss: Yeah, that's an interesting question. Now, an AI needs to be able to recognize emotions, and there is already quite a bit of technology to do that, to recognize emotions and then to be in the right mode. We call that sort of metacognition: you have an overall sense of where your own thought process is and then a theory of mind of where the other person's thought process is. Emotions are obviously one element. Is somebody in a hurry? Are they aggravated? Are they happy, or whatever it is? These are pretty subtle things. Again, it's something that we'll get better at over time.

I think one of my hobbyhorses here in this area is that companies are trying to fool people into believing that the AI is a human. Google, with their Duplex, just did this demo a few months ago where they put ums and ahs in there and so on. I think that's fundamentally wrong.

I think when you're dealing with an AI, you should be upfront that you are dealing with an AI, and people would frame it correctly and realize that, well, yes, it's not going to have that same emotional content, but it has other skills. It has a photographic memory. It has instant access to a lot of information. It can calculate very well. I think it's much more useful for people to know that they're dealing with an AI so that they can capitalize on the strengths and not be sidelined by expecting it to respond to subtle emotional cues as effectively as a good human would. [Laughter]

Michael Krigsman: Why do you feel that way? Is it an ethical issue? Are there practical, commercial reasons? Why do you have that perspective?

Peter Voss: I think it's both. Yeah, absolutely. I think, ethically, I don't feel comfortable trying to fool people into believing that you're talking to a human when you're talking to a machine. It just, yeah, is not right. Any sort of dishonesty, you're off on the wrong track.

From a practical point of view, as I mentioned, also, if people know that they're talking to a computer, they can adjust to that. Now, the counterargument is that they may hang up on you or whatever. You need to give people what they want.

My experience with call center automation at Smart Action was, you can tell people that they're talking to a computer, but it has to work. People are totally happy to talk to a computer if, immediately, they're getting results - if it doesn't come up with a menu and say, "Your business is important to us. Our options have changed recently. Please listen carefully. Press 1," for whatever, you know, and then go through a long list of things.

Well, of course, people will hang up on that, and so they should. Companies shouldn't be allowed to offer that kind of technology anymore. [Laughter]

If your AI says, "How can I help you? I'm your AI helper," if you want to be explicit about it, or if you want to give more subtle cues, like making the voice a little bit robotic so that you know it is a robot - however you want to do that. But, if it immediately says, "How can I help you?" and the system understands you and says, "Oh, okay. I can do that. I can solve that for you," or, "I can transfer you," or, "I see you have an order that you're waiting for. Are you calling about this?" If it works, then everybody is happy, and people don't get misled. They have the right expectations.

Michael Krigsman: We have a question from Twitter, a good question that I'm remiss in not having asked because it's kind of an obvious one, which is, "What are the applications or the use cases of artificial general intelligence that you're developing or working on at Aigo.ai?"

Peter Voss: Okay. Well, thanks for that question, of course. I already mentioned some of them. I didn't specifically say that that's what we're doing. The two biggest applications, one is in the consumer space as a personal personal assistant. In fact, that's our tagline, a personal personal assistant.

The reason we came up with that moniker is, personal is really, really important. Personal has three different dictionary definitions, all of which are important.

  • The first one is personal ownership; you own it. Currently, with the chatbots and so-called personal assistants out there, you don't own them. It's some mega-corporation that owns and controls them to serve their agenda. With us, it's yours. You own it, and you own all the data that goes with it. That's the first meaning of personal.
  • The second meaning of personal is personalization or customization, that it adapts to you. It learns. It's permanently with you, and it gets better over time as it learns more, so the customization aspect.
  • The third meaning of personal is private and confidential. Basically, it will only share information you want it to share, and only with whom you want to share it. Some information, you may not want to share with anyone, some with your spouse, some with your colleagues at work, and so on. You have this permissions-based security.

All three of these items, we feel, are really important. That's why we call it a personal personal assistant.

Then, of course, the other thing is that it has this cognitive architecture and a much higher level of intelligence, of memory, and reasoning and so on. That's a key application. Of course, breaking into the consumer market with a new product like that will take time, so we're actually building the community of users. We can maybe talk about that in terms of how we use blockchain for that.

The other area is in enterprise. There, we are really looking at the medical side, for diabetes management or for coaching, and at robotics. We have interest in all of those areas. The area that we see as probably the first to go after, that we're totally focused on, is this software frontend, this intelligent assistant--we call it a copilot--for complex software, whether it's Salesforce, SAP, some business intelligence software, QuickBooks, or whatever, that you can basically talk to. It can help you, if you don't understand something, without having to go through many menus to get to where you want to go and then fill out a form of what you want to do. You can actually just talk to the software.

There's huge demand. Basically, all the software companies are looking for a solution like that, and they're finding you can't solve it with conventional chatbot technology. You really need something more cognitive, a cognitive architecture. That's a huge, huge area, and we're really well positioned to target that.

Michael Krigsman: Well, I'm looking for just a calendar assistant. My expectations, as a consumer, are much lower. I just want a calendar assistant that will schedule things and do it well. Whenever you feel yours is ready, I'd love to try it out. [Laughter]

Peter Voss: All right. Well, you can reserve a low serial number for Aigo and be the first on the block to get one. Go to my.aigo.ai, and you can register and be one of the first users of our personal personal assistant.

Michael Krigsman: I will do it: my.aigo.ai. Now, you mentioned community, and you mentioned blockchain. I'm glad you did because I wanted to be sure we speak about that. Where is the intersection now of blockchain and what you're doing?

Peter Voss: Yeah. It's just sort of a fascinating field that just emerged in the last few years. Really, it's exploded in the last year where blockchain technology has become available. There are a lot of people, well, more and more people with expertise in that area.

What a blockchain is, it's a secure ledger on which you can record things securely. It's inherently tamper-proof. With most applications, the blockchain is also decentralized so that you have that extra security. There are many copies of the ledger, so you can't just have one person fudging it or whatever.
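
For readers who want the mechanics behind "tamper-proof," here is a minimal Python sketch of a hash-chained ledger: each block records the hash of the previous block, so changing any past entry invalidates everything after it. Consensus and decentralization are omitted, and the field names are assumptions for illustration only.

```python
import hashlib
import json
import time

# Minimal hash-chained ledger: each block stores the hash of the previous
# block, so altering an old entry breaks every link that follows it.
# (Real blockchains add consensus and many decentralized copies.)

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev, "ts": time.time()})

def is_valid(chain: list) -> bool:
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False  # a past block was altered
    return True

ledger: list = []
add_block(ledger, {"owner": "alice", "aigo_serial": 42})   # hypothetical entries
add_block(ledger, {"owner": "bob", "aigo_serial": 43})
print(is_valid(ledger))                  # True
ledger[0]["data"]["owner"] = "mallory"   # tamper with history
print(is_valid(ledger))                  # False - the chain no longer verifies
```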

Now, a separate aspect that goes hand-in-hand with that is cryptocurrency. Blockchains themselves can be used without cryptocurrency, just to have a secure ledger, but many of the applications of blockchains actually involve a cryptocurrency. That's really, really fascinating: on the one hand, we have Bitcoin, where people just see it as a store of value and an ability to basically exchange value across borders.

But, then you get a cryptocurrency or platform like Ethereum, where it goes beyond that. You have the currency element of Ethereum, which has value in itself like Bitcoin, but it also serves as a platform that allows other people to use the blockchain and to embed smart contracts.

You really have the three things. You have the ledger, you have smart contracts, and you have a currency. The beauty of it, with Ethereum, you can create your own local currency for your project. That's basically what we're doing in the consumer market.

Michael Krigsman: Where is the intersection with artificial intelligence? Why are they natural partners? I know that's your perspective.

Peter Voss: Right. There are quite a few different applications with blockchain and AI. In fact, I just came from a conference, Brains and Chains in New York, and people were talking about how AI can help improve the blockchain. That's one thing: using AI technology to optimize the performance of the blockchain. That's one area where they come together; probably not that well explored.

Then the other aspect is where the blockchain can help deploy AI in some way. There are a number of companies offering services where they're creating a marketplace of AI capabilities. Those could be algorithms or AI programs and, in most cases, they are narrow programs.

Very few people are working on AGI. In most cases, it would be some marketplace for AI algorithms. Anybody who can develop an algorithm can sell it. Think of it as an eBay where algorithms are traded, with the platform maybe used in some way to coordinate these things.

The other AI angle is a marketplace for data because, with machine learning and deep learning, data is really, really important. The more data you have and the more access you have to data, the better. People are using blockchain to create a marketplace and say, "If you have some data that's valuable, put it in our marketplace. Put it on the blockchain so that it's secured and your ownership is registered and then, maybe using smart contracts, you can get paid if somebody uses the data." It's AI algorithms and data that are put on the marketplace.

Now, in our own approach, the blockchain ecosystem is useful for our personal personal assistant in three different ways. The first way is that, because you have an instance of Aigo, you own your own personal assistant, it's yours, each one has a serial number. That serial number is recorded on the blockchain permanently. It's against your name. Your ownership of Aigo, the personal assistant, together with all the data, is secured on the blockchain in that ledger. That's the first thing.

The second thing is that people can also teach Aigo new skills. For example, if you are good at helping people manage stress, then you might teach Aigo how to help people manage stress. That app or that skill, you can then put on the Aigo store and sell it to other people. The blockchain smart contracts keep track of the royalties that need to be paid or, if it's a one-time fee, however you determine you want to be paid for it. That's the second important use, having these smart contracts that get executed automatically. There's no middleman involved. There's no extra overhead involved in terms of the paperwork. It just happens on the blockchain.

Then, the third aspect of it, for the community, is to have an Aigo token as a currency for people to actually use those Aigo services and Aigo intelligence. The beauty of that is that, because it's its own local currency with a fixed supply, as the community grows and demand increases, the users, the community, benefit from that local currency that we have. That's really a new business model. It's really very innovative, and it's fantastic for building a community, rewarding a community, and having that community really work together to achieve the bigger objective.
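
As a rough illustration of the skill-store royalties and token payments just described, here is a small Python sketch of the bookkeeping a smart contract would automate; in practice this would run on-chain (for instance as an Ethereum contract), and the skill names, prices, and balances here are purely hypothetical.

```python
# Toy sketch of the royalty bookkeeping a smart contract would automate:
# each time a skill is used, the creator is credited automatically, with no
# middleman. Skill names, prices, and the token balances are illustrative.

class SkillStore:
    def __init__(self):
        self.skills = {}     # skill name -> (creator, price in tokens)
        self.balances = {}   # account -> token balance

    def publish(self, creator: str, skill: str, price: float) -> None:
        self.skills[skill] = (creator, price)

    def use_skill(self, user: str, skill: str) -> None:
        creator, price = self.skills[skill]
        if self.balances.get(user, 0) < price:
            raise ValueError("insufficient tokens")
        # The "contract" executes the payment automatically.
        self.balances[user] -= price
        self.balances[creator] = self.balances.get(creator, 0) + price

store = SkillStore()
store.balances["michael"] = 10.0
store.publish("alice", "stress-coaching", price=2.0)
store.use_skill("michael", "stress-coaching")
print(store.balances)  # {'michael': 8.0, 'alice': 2.0}
```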

Michael Krigsman: For you, blockchain is part of a business model and go to market--in a way, the business model is very much related to go to market--rather than technology. It's obviously very distinct from the AI technology that you're developing. It's in the service of bringing your technology products to market. Is that a correct assertion?

Peter Voss: Yes, that is correct. It's an integral part of assuring that people's ownership is recorded, that the transactions can happen smoothly between creators and consumers of Aigo intelligence and, basically, making that community work well. That's correct.

Michael Krigsman: Peter, we have just a few minutes left. Can you look into your crystal ball and tell us what's coming up the pike, in a practical, reasonable way, that's going to come to market in the next, say, two, three years? Rather than looking ten years out, where is AI going in the next couple of years?

Peter Voss: Well, a crystal ball is always difficult. There's certainly a huge amount of momentum on deep learning, machine learning, and big data. That will continue. There are many applications where that's the right tool to use.

Where you have a specific, narrow, sort of static problem to solve, and you can get enough training data to train a model and then have that sort of read-only, or largely read-only, model execute - that will continue, undoubtedly. I think people and companies will become more savvy about figuring out where that actually works and where it doesn't.

A few years ago, people thought that natural language could be solved by machine learning, deep learning. Just give it enough data. By now, it's pretty universally accepted that meaningful natural language conversations, an ongoing conversation, cannot be handled with just deep learning by itself.

I see the cognitive architecture approach that we have playing a bigger and bigger role as companies realize the limitations. More generally, the sort of AGI approach says that you need a system that can learn instantaneously, unsupervised, so that you don't need a lot of reliable data - like the example I gave: you see a giraffe once; okay, I now know what a giraffe looks like. In perception and action, the systems can learn immediately by interacting with the environment. I think people are still looking for those kinds of solutions, and I think they will soon discover that the cognitive architecture is the way to go.

Michael Krigsman: Okay. We are out of time. I wish we had more time. What a very engaging and fast conversation. Peter Voss, thank you for taking the time and being with us here today.

Peter Voss: Thanks for having me. That was fun. Thank you.

Michael Krigsman: We have been speaking with Peter Voss, who is the CEO and the chief scientist at Aigo.ai. What a great and interesting show this has been. Be sure to go to CxOTalk.com and check out our other videos, and don't forget to subscribe on YouTube. Now is the time, and tell your friends to do that too. Thanks so much, everybody. I hope you have a great day. Bye-bye.

Published Date: Jun 29, 2018

Author: Michael Krigsman

Episode ID: 527