
Building the Brain-Prosthesis Interface

RELEASED March 27, 2013
Episode Summary
How have prosthetic interfaces evolved over time? Is it possible to control an artificial limb through thought alone? What is the future of prosthetics using brain-computer interfaces? Join the conversation with Jonathan, Joe and Lauren.

Male Speaker 1: Brought to you by Toyota. Let's go places. Welcome to Forward Thinking.

Jonathan:    Welcome everyone to Forward Thinking, the podcast that's bigger, faster, stronger. I am your host, Jonathan Strickland, and I am joined by ...

Lauren:    Lauren Vogelbaum.

Joe:    And Joe McCormick.

Jonathan:    And we wanted to talk about transhumanism, but specifically about how prostheses have changed over the years, how they are going to change in the future, and the interfaces that we will use between our brains and a robotic prosthesis, for those of us who need to be fitted with them. And to really understand the development of prosthetics, you just have to look back a few decades, really, to see how far we've come.

Because Dean Kamen, who has developed many technologies, one of the most famous being the Segway, but also something called the Luke Arm, which we will talk about a little bit, said: just look at the development. Prosthetic legs have progressed quite a bit over the last several decades, but prosthetic arms have not. Essentially, if you lost, say, a hand, you would get it replaced with a hook. Maybe a few years later they developed a hook with a clamp on it that would be attached to other muscles, and when you clenched those muscles you could make the clamp [Crosstalk]

Lauren:    Pinchy

Jonathan:    Yeah, you could pinch things, pick things up. But still very limited mobility, very limited utility. This sort of state of affairs lasted way too long.

Joe:    It's surely not because the designers were lazy, right? This is really hard to do.

Jonathan:    Yeah, no it’s incredibly tricky to design a limb that can replace something as versatile as a human arm.

Joe:    When you think about the human arm, you think about how many degrees of freedom you need to do something as complex as, say, playing a guitar, or even just eating spaghetti.

Jonathan:    Sure

Lauren:    Even just picking up a cup when you're not entirely sure how hard the cup is. That's not a thing that you find out until you touch it, and you've got this incredibly complex feedback system.

Jonathan:    Right, yeah, or how heavy it might be. So you have to – you know, we take it for granted, those of us who have all our limbs, because that's our daily experience, right? So it's only when something has happened where we need to have a limb replaced, or maybe we were born without a limb, that this really becomes a consideration that we think about for any length of time. Otherwise, this is just life; we can reach over and pick up a cup.

Lauren:    Yeah, yeah, which of course happens all the time, going way back into history with war. I mean, especially back in history, because we had less good medical technology.

Jonathan:    Yeah, "less good" was pretty much the mark of history as far as medical technology went. I tease Vogelbaum, but I say worse things on a nearly minute-by-minute basis.

Lauren:    I think the word “good” is a good word.

Joe:    I agree with her.

Lauren:    But then, you know, you've got all kinds of science fiction things, like Star Wars, where Luke loses a hand and –

Jonathan:    Wait, what?  I’ve only worked my way up to Episode 2.

Lauren:    OMG, spoilers. Or even Army of Darkness, where, you know, Ash goes back in time and somehow concocts this mechanical hand.

Jonathan:    Then the gauntlet. It's groovy, I will say it's groovy.

Lauren:    Very groovy.

Jonathan:    For the longest time, we were looking at pretty limited prosthetic arms, and really, that's where a lot of the focus in the technology has been recently. Although we've seen other types of robotic aids, not just for arms but also things like cochlear implants. But arms are really what we are focusing on today.

Joe:    It seems that – what's the difficulty here, right? We can create robotic arms these days that are incredibly precise, and I'm not talking about the kind of arms you put on a person.

Jonathan:    Right, you're talking about, like, a stationary arm that might work on a manufacturing line.

Joe:    Yeah, go look at an auto production facility and look at the amazing arms they can build.  The problem doesn’t seem to be any more the design of the arm itself, but the interface between the arm and the human brain that controls it.

Jonathan:    To be fair there’s also – I’m sorry Lauren, go ahead.

Lauren:    Oh, no, no, no, I was just going to say, you know we still know more about robotic mechanics than we really do about human brain mechanics because the way that neurons work is kind of mysterious.

Jonathan:    Right, and on top of that, just to go back to another challenge.  That auto assembly arm probably weighs several hundred pounds.

Joe:    Oh sure, sure.

Lauren:    And it does one thing. To be fair, it has one job.

Jonathan:    Right. You have one thing to do today, arm.

Joe:    But it did it right, six thousand times.

Jonathan:    Right, but again, that is something that was made for industrial use. Here you are talking about something that is meant to replace a lost human limb. Obviously, the things you have to worry about are not just usability but how heavy it is. It needs to be lighter.

Joe:    And the cost.

Jonathan:    The cost also, yes, but really, I mean, putting cost aside for now: how heavy is it? You have to make it light enough that a human being can use it comfortably, or else it's not useful, right? It's not something that is going to increase someone's quality of life, which is really what we're talking about here. So, it needs to be light. It needs to be efficient, because if you have to constantly plug it in because the batteries are draining, that would be a quality of life issue as well. It needs to be versatile.

It needs to be able to give you some sort of sensory feedback, because if you have a robotic arm that has the ability to grip but no feedback, you wouldn't know how hard to grip something before picking it up. You could shatter a glass or –

Lauren:    You’re just squishing cups of coffee all the time, every day.

Jonathan:    Exactly

Joe:    You're not able to hug your loved ones.

Jonathan:    Right, you wouldn't want to do that, obviously, if you were unable to determine how tightly you were squeezing them without hearing them squeal in pain. So these are challenges, they are real challenges. It seems kind of easy to make light of it, but when you think about it from an engineering standpoint, these are real challenges to overcome.

To kind of talk about where we've been recently: Dean Kamen again, the guy who invented the Segway, took on a project that he ended up calling the Luke Arm, and he named it after the character from the Star Wars series, Luke Skywalker, who –

Joe:    The Star Wars Series?

Jonathan:    Yes, it’s a series.

Lauren:    A series of three.  Wouldn’t it be nice if they made some Star Wars prequels?  I bet that would be swell.

Jonathan:    I love making that joke every time we bring this up.  But yes, Star Wars, of course Luke Skywalker has his arm lopped off by his daddy Darth Vader, spoiler alert.

Joe:    Really?  It’s just his hand but go on.

Jonathan:    It is just his hand, but he ends up having more than just the hand replaced, because you can see it in the wrist, right? But anyway, he has his hand replaced with the robotic arm.

Joe:    Just go, yeah.

Jonathan:    So Dean Kamen named his robotic arm after that character, Luke Skywalker. And what happened was, he was actually approached by the United States government, and they told him: listen, we have a lot of servicemen and women coming back from overseas who have suffered injuries in the line of duty, and while we can do a lot for anyone who has lost a leg, because that technology has really improved quite a bit, so people can get around with some mobility, better off than they were, that technology hasn't really advanced for arms. So if someone lost an arm, we don't really have anything sophisticated to help replace it.

So, they gave him a challenge. They said: we need you to develop a technology that will allow a person who has lost all or part of an arm to have essentially the same mobility they would have if they still had their arm. It can't weigh more than a normal, or really I shouldn't say normal, an average human arm, and it needs to have some sort of sensory feedback, so you know how tightly to grip something when you're picking it up.

At first Dean Kamen said, wow I don’t know that this is possible because you’re talking about developing something that’s really light –

Lauren:    So advanced

Jonathan:    Yeah, advanced and light. Those are the two things, right? It needed to have a lot of technology in it, and it needed to be made out of materials strong enough to do the job but still light enough that someone would not feel like it's a burden to wear. So, they developed this Luke Arm system.

The early ones had an interface that was completely – I mean, it was electronic, but it was similar to a mechanical system, in the sense that you would have buttons that you would operate, but with your feet. You would wear – you know, your shoes would have the controls in them, and by putting pressure on your toes, or the balls of your feet, or your heels, you could make the arm do different things.

There's a great video that shows an amputee who has lost nearly all of one arm and the other arm entirely. He is wearing the Luke Arm, which gives him the ability – I think it's his left arm that he's got now with the Luke Arm – where he can do things like, if he leans forward, the arm bends at the elbow, so he can bring his hand closer to his face. If he leans back, then it extends the elbow. Then by operating a switch by leaning his neck just a little bit, he can change it so that those same commands rotate the wrist instead. So, through a series of subtle, and these are subtle, it's not like he has to lean way forward to have this happen, but a few subtle muscular movements, he can operate this robotic arm.

So that's one form of interface. Now, granted, in this case, you really have to train yourself how to operate this robotic arm using all these different motions. It's almost like, in a way, playing a video game, manipulating a digital character through physical controls, same sort of thing. You are not sending commands directly from the brain to the robotic arm; you're saying, all right, here's what I need to do. I need to lift my arm up, so I have to put pressure on my toes so that I can give the command to lift up the arm.
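A minimal sketch of the kind of body-gesture control mapping described here. The sensor names, thresholds, and command set are assumptions for illustration only, not the Luke Arm's actual control scheme.

```python
# Purely illustrative: pressure sensors in the shoes plus a "mode" switch
# select which joint a lean gesture drives. Names and thresholds are made up.

from dataclasses import dataclass

@dataclass
class ControlInputs:
    toe_pressure: float      # 0.0-1.0, pressure on the toes
    heel_pressure: float     # 0.0-1.0, pressure on the heel
    lean: float              # negative = lean back, positive = lean forward
    mode_switch: bool        # neck switch: False = elbow mode, True = wrist mode

def interpret(inputs: ControlInputs) -> str:
    """Translate body gestures into a discrete arm command."""
    if inputs.toe_pressure > 0.6:
        return "hand_close"              # press toes to close the grip
    if inputs.heel_pressure > 0.6:
        return "hand_open"               # press heel to open the grip
    if abs(inputs.lean) > 0.2:
        joint = "wrist_rotate" if inputs.mode_switch else "elbow"
        direction = "forward" if inputs.lean > 0 else "back"
        return f"{joint}_{direction}"    # same lean gesture, different joint per mode
    return "hold"                        # no gesture: keep the arm where it is

# Example: leaning forward in elbow mode bends the elbow.
print(interpret(ControlInputs(0.1, 0.0, 0.5, False)))  # -> "elbow_forward"
```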

Lauren:    It would have to become second nature, yeah. Any prosthesis is going to involve people learning those kinds of commands.

They are making these retinal prostheses these days that have an array of electrodes in place of the cells that would normally detect light for you. They are hooked up to something kind of like Google Glass, sort of like a little video camera in glasses that you can wear. The glasses tell you when you are seeing something. They send a signal back to the electrodes. The electrodes send a signal to your brain, but it's not as if you're seeing it. You have to learn how to interpret the messages that are being sent.

Jonathan:    Sure, like you might see blocks representing an object in your field of view, and the greater the resolution, the more blocks you will see and the closer those blocks will resemble whatever the shape is. So, in general with these, right now the state of the art, as I understand it, is that it lets you see that there's a shape in front of you, but it doesn't really give you a lot of definition yet. But it's incredibly promising.
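A rough illustration of why resolution matters here: averaging a camera frame down to a coarse grid of "electrode" intensities produces exactly that blocky effect. The grid sizes and the averaging step are assumptions for illustration, not how any real retinal implant processes video.

```python
# Illustrative only: downsample a grayscale frame to a coarse block pattern,
# roughly why a low-resolution implant conveys shapes but little definition.

import numpy as np

def to_electrode_grid(frame: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Average a grayscale frame (H x W) into a rows x cols block pattern."""
    h, w = frame.shape
    grid = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = frame[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            grid[r, c] = block.mean()   # one "electrode" = average brightness of one block
    return grid

frame = np.random.rand(240, 320)           # stand-in for a camera image
coarse = to_electrode_grid(frame, 6, 10)   # few electrodes: only rough shapes survive
finer = to_electrode_grid(frame, 24, 40)   # more electrodes: blocks resemble the shape better
```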

Lauren:    Yeah

Joe:    But we’re always trying to get closer, aren’t we.
Jonathan:    Yeah

Joe:    And there are some people who have gotten amazingly close, it seems to me. By close, I'm talking about the connection between the brain and the movement of the prosthesis in a way that feels natural.

Now, when you go to move your arm, assuming you have a regular human arm that is still attached to you the way most people do, you don't have to think about a series of commands to do it. It's intuitive.

Jonathan:    Right

Joe:    You just think move and it moves. You think pinch and it pinches. Could we get there? Well, that's something I think a lot of people who design prostheses have been thinking about for a long time.

Jonathan:    Sure, it’s a great goal.

Joe:    Right, and so I saw a really interesting TED Talk by a guy who designs prostheses, and his name, I think I'm pronouncing this right, is Todd Kuiken, and he was talking about a process called targeted muscle reinnervation. And if I understand it correctly, the way this works is, they can essentially simulate that direct connection between the brain and the movement of the prosthesis. It works like this: you have a mechanical arm that has a certain number of degrees of freedom and actions it can do, like, say, move the forearm up and down by bending at the elbow, or pinch by moving the muscles in the arm and the hand, and in us all of these things are controlled by nerve impulses through nerves that go down the arm.

Now, they can't connect those nerves directly to a machine yet; they don't know how to do it. We just haven't figured out the way to get a nerve to send a signal directly to a machine to make it do the job, but –

Jonathan:    To essentially build a new pathway for the ones that were lost, or maybe were never there.

Joe:    Right, but they can reroute it mechanically. And what he showed is that they would perform a surgery where they take these nerves out of the arm and reroute them to a muscle that is not used much, like near the top of the pectoral muscle, and each of these nerves would lead to a small patch of muscle up in the upper pectoral. So, when the person sends that thought –

Jonathan:    The command

Joe:    Say the command that you would use, when you have an uninjured arm, a regular arm, to pinch. That sends the muscle command, and it makes the muscle where the nerve has been rerouted, in this case the upper pectoral, contract. Now they can teach the machine, based on sensors attached to that muscle, to recognize those commands. So, essentially, by creating a mechanical detour for the signal to follow, you can create direct brain-to-prosthesis communication. So a person with this arm really thinks pinch and the arm pinches.

Lauren:    So almost instead of teaching the person how to rethink the process, they are teaching the machine how to rethink the process.

Joe:    Exactly

Jonathan:    Yeah, you tell the machine how to interpret those muscle contractions, so that, essentially, when this part of the muscle contracts, that means rotate the wrist, that kind of thing. Because that's the actual command that's coming from the brain.
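A minimal, hypothetical sketch of that "teach the machine" step: classifying a pattern of muscle-sensor readings into an arm command. The channel count, labels, and nearest-centroid method are assumptions for illustration, not Kuiken's actual algorithm.

```python
# Assumed setup: a few sensors sit over the reinnervated muscle patches, and
# the decoder learns which activity pattern corresponds to which command.

import numpy as np

class CommandDecoder:
    def __init__(self):
        self.centroids = {}   # command label -> average sensor pattern

    def train(self, samples: dict[str, list[list[float]]]) -> None:
        """samples maps a command ("pinch", "rotate_wrist", ...) to example readings."""
        for command, readings in samples.items():
            self.centroids[command] = np.mean(readings, axis=0)

    def decode(self, reading: list[float]) -> str:
        """Pick the trained command whose average pattern is closest to this reading."""
        r = np.asarray(reading)
        return min(self.centroids, key=lambda c: np.linalg.norm(r - self.centroids[c]))

# Example: two sensor channels over two reinnervated muscle patches.
decoder = CommandDecoder()
decoder.train({
    "pinch":        [[0.9, 0.1], [0.8, 0.2]],   # patch 1 contracts strongly
    "rotate_wrist": [[0.1, 0.8], [0.2, 0.9]],   # patch 2 contracts strongly
})
print(decoder.decode([0.85, 0.15]))  # -> "pinch"
```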

Lauren:    That’s so cool.

Jonathan:    Yeah, that’s really fascinating.

Joe:    I got chill bumps when I was watching it.

Jonathan:    Yeah, it's a huge leap ahead, it's an amazing development, and it's very promising. And I imagine that the next step would be the direct brain interface, where we don't even have that little mechanical –

Lauren:    Stopover, sure.

Jonathan:    Stopover, sure.

Joe:    Because right now there's a problem with real estate, right? You only have so much muscle on your body that you can use to amplify these nerve signals, and by doing that you're taking up muscles that really should be used for other things.

Jonathan:    Right, right, there are only a few muscles that we would not think of as being really necessary for day-to-day life. You know, things that you have but that you're not using all the time for some other purpose. Clearly, if it were something you were using all the time, that would interfere. Like, when you were trying to actually accomplish one task, there would be a second task going on with your robotic limb, because it was misinterpreting the commands.

Yeah, because again, the machine itself doesn’t know any one command from another.  It is just when it detects there is this activity going on that’s a command for it to do something.  The machine itself is not intelligent; it’s just reacting in a very specific way to a very specific input.

So, yeah, I think the next step is the whole brain-computer interface, which is going to go well beyond just prosthetics, or prostheses I should say. Joe corrected me before the show.

Joe:    Yeah, prosthetic is an adjective.

Lauren:    Yeah, it’s prosthesis.

Joe:    Yeah, I think I said it that way earlier today too.

Jonathan:    What do you call it when you have to set your clocks back an hour?

Joe:    Oh, you mean –

Jonathan:    Anyway, yeah, it's exciting to see this development, and very encouraging. If you have not gone online and watched that TED Talk, or watched some of the videos about Dean Kamen's Luke Arm, I highly recommend watching them. They're very inspiring, and to see the people this is affecting, the people who are suddenly regaining abilities that they might have lost more than a decade or two decades ago, to hear them talk about that experience, is really a phenomenal thing.

A lot of the people who are working on this medical technology credit the fact that you're actually seeing lives change because of what you do, and that's why they are doing it. It's not because there is some sort of lucrative contract involved. It's all about, when you see someone's life change in that huge way, and they are suddenly much more self-reliant because of it, that's phenomenal. You know, that's a great story. And those are very inspiring videos. I highly recommend checking them out.

Joe:    So, I have a question for you all.

Jonathan:    Sure

Joe:    Imagining that this trend is going to continue on as most technological trends do, how far away do you think we are from a time when you can create a prosthetic arm that’s virtually indistinguishable from the arm you’re born with?

Jonathan:    Well, right now we can already create limbs that give some form of force feedback, although usually that ends up being something like a little vibrating motor, and the more it vibrates, the harder you are gripping something. So, right now that's kind of an artificial way of determining how hard you're gripping.
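A tiny sketch of that kind of proportional feedback, with made-up scaling values: grip force read from the hand mapped to a vibration intensity the wearer can feel.

```python
# Illustrative only: the force range and clamping are assumptions.

def grip_force_to_vibration(force_newtons: float, max_force: float = 50.0) -> float:
    """Return a vibration duty cycle between 0.0 and 1.0."""
    return min(max(force_newtons / max_force, 0.0), 1.0)

print(grip_force_to_vibration(10.0))  # light grip -> gentle buzz (0.2)
print(grip_force_to_vibration(80.0))  # squeezing hard -> full vibration (1.0)
```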

I would say that this is so much fun, because whenever you're talking about future technologies, it's always safe to go with "we're twenty years away." We're always twenty years away. It depends on the technology. Either you are always twenty years away, or you're always a decade out.

Joe:    Time travel, 20 years away.

Jonathan:    20 years away, 20 years away.

Lauren:    Singularity, twenty years away.

Joe:    20 to 22.

Jonathan:    Yeah, but seriously, I mean, the complex nature of creating a brain-computer interface that is seamless, it's impossible for me to overstate how complicated that is, because honestly we don't understand everything about the brain. So, until we have a true understanding of the brain, it's very difficult to create an interface that's going to work –

Lauren:    Seamlessly

Jonathan:    Yeah, especially across a population right.

Lauren:    Sure.

Jonathan:    You have to build them almost from the ground up on an individual basis because we don’t have enough of an understanding to approach it from a more general standpoint.  So, final answer, twenty years.  Lauren?

Lauren:    [Laughter] while we’re making up numbers, I’m going to say fifty, fifty years, definitely fifty.

Jonathan:    Lauren’s a pessimist, Joe?

Joe:    I have no idea.

Jonathan:    Then why did you ask the question? Because you just wanted to know?

Joe:    Because you all are smarter than I am.

Jonathan:    Oh, okay well that’s fair.

Lauren:    I do not think that’s true.

Joe:    Don’t comment.

Jonathan:    So, no, I'm just kidding. Joe is a very bright guy, almost human-level intelligence.

Lauren:    [Laughter] Oh, snap!

Jonathan:    But anyway, no, no.  This is a really interesting topic, and it’s one of those where I think, when you see the benefits of the technology, it’s – I can’t imagine not being inspired by it.  I find it an incredible story so I’m really eager to see this continue in the future.

Meanwhile, we want to know what you guys think about the future. What excites you about the future? We want this to really be a conversation, so go to fwthinking.com. Be part of our group. You can follow us on Facebook, on Twitter, on Google Plus. We're at all those locations. We're eager to have this conversation with you and to find out what makes you excited about the future. Let us know, and we will talk to you again really soon.

 

Male Speaker 1: For more on this topic and the future of technology, visit forwardthinking.com.

[End of Audio]

Duration: 21 minutes
