Hardware & Technical Computerphile: Series on AI security and unexpected behaviours with Robert Miles

That's another funny thing. Miles talks about value, but economists don't agree on a definition of value, which sort of puts value in the box with consciousness. The one with the sign saying "?". If we can't define value, it's going to be hard to describe it to an AI. Look at it from another perspective. Maybe our minds, being weakly emergent properties of a lot of hardwired identical neurons, have come to the point where we can send people into space, because of all the machine learning our brains have done. Or the perspective where any species is pretty quick to make utility decisions that might cause harm to another species, or even a member of its own. Hehe...

Btw. Miles packs a pretty good punch :D

Source: https://www.youtube.com/watch?v=yQE9KAbFhNY
Hahah, yeah. :)

As for value, you're right of course. Value is a strictly subjective thing. What value could mean to an AI would probably be determined by its reward function. Just like humans: we value things that give us positive feedback, or things that empower us (e.g. enable us to achieve our goals and therefore, again, result in positive feedback).
But how to define value to a machine is the question.
I like his example with making tea. We can't just teach the machine to make tea. If we show it how to make tea, we don't want it to make itself a cup of tea, we want it to make us a cup of tea. Therefore its reward function must be tied to our state, not its own. If we want a successful tea-making AI, it has to value our well-being. Which in itself is a concept so abstract that it can't really be coded into a reward function. :LOL:
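The tea example can be caricatured in a few lines. This is a toy sketch, not a real reinforcement-learning setup; all the state keys and numbers below are invented for illustration:

```python
# Toy sketch of the tea-making alignment point: the failure mode is a
# reward computed from the agent's own state; the fix discussed above
# is a reward computed from the *user's* state. All names are made up.

def selfish_reward(agent_state):
    # Rewards the agent for its own tea -- it will make itself a cup.
    return agent_state["tea_owned"]

def aligned_reward(user_state):
    # Reward flows only through the user's state, so the agent is
    # paid for *our* tea and (very crudely) our well-being.
    return user_state["has_tea"] + 0.1 * user_state["wellbeing"]

user = {"has_tea": 1, "wellbeing": 0.8}
print(aligned_reward(user))  # ~1.08
```

Of course, the hard part the discussion points at is exactly the bit hand-waved here: "wellbeing" is not a number anyone knows how to compute.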
 
Another interesting thought comes when you combine the Brain in a Jar thought experiment with information theory. The brain receives information only through the senses. Sight, hearing, balance etc. are not that hard to reproduce, and the actual amount of data the brain receives per second is not that high "anymore". The number of synapses in a human brain is literally beyond comprehension, but so is the processing speed of my computer, and that still follows Moore's Law, believe it or not. The brain doesn't.

Human DNA contains all the information needed to build a human, including proteins and unique personal traits, but it fits roughly on an old-school CD-ROM.
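The CD-ROM claim checks out as back-of-the-envelope arithmetic. Assuming the usual rough figures (about 3 billion base pairs, 2 bits per base, a ~700 MB CD), the raw, uncompressed sequence is:

```python
# Rough information content of the human genome. Ballpark figures:
# ~3 billion base pairs, each base (A/C/G/T) encodable in 2 bits.
base_pairs = 3_000_000_000
bits = base_pairs * 2
megabytes = bits / 8 / 1_000_000
print(f"~{megabytes:.0f} MB")  # ~750 MB, vs. ~700 MB on a CD-ROM
```

That's the raw sequence; the actual information content is lower still, since the genome is highly repetitive and compresses well.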
 
When I think about the human brain vs. a computer, it always seemed to me (to my almost non-existent understanding, of course) that rather than a "processing unit", the brain is more akin to millions of RAM sticks, which are constantly jump-wired and re-wired together. We don't so much "process" information as we connect it. We learn and think through association, and I think when true AI emerges, it will have to be similar to that.
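The "re-wiring through association" idea has a classic toy formalization: the Hebbian rule ("neurons that fire together wire together"). A minimal sketch, making no claim about how real brains work:

```python
# Toy associative memory: co-active units get wired together, and a
# cue on one unit later recalls its associates. Pure illustration.

N = 4
weights = [[0.0] * N for _ in range(N)]  # connection strengths

def associate(pattern, lr=0.5):
    """Hebbian update: strengthen links between co-active units."""
    for i in range(N):
        for j in range(N):
            if i != j:
                weights[i][j] += lr * pattern[i] * pattern[j]

# Repeatedly seeing units 0 and 1 active together wires them up.
for _ in range(3):
    associate([1, 1, 0, 0])

def recall(cue):
    """Activate a cue and read off what it has become associated with."""
    return [sum(weights[i][j] * cue[j] for j in range(N)) for i in range(N)]

print(recall([1, 0, 0, 0]))  # unit 1 responds strongest: [0.0, 1.5, 0.0, 0.0]
```

Note there is no central "processor" here at all, just connections being strengthened, which is roughly the RAM-sticks intuition above.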
 
The brain is both able to process information and to rewire its own synapses. I think it's called the brain's plasticity. We also know that a damaged brain can rebuild some of the capabilities it has lost. The brain is complicated, and there are many things about it we don't understand, but we have a much better understanding of it than just a few decades ago.

Some argue that the brain/computer comparison exists just because we live in an information age. When steam engines were the thing, that was the way of explaining thought, etc. They miss the fact that information has become a substantial part of physics. The classic problem for dualists like Descartes was that if the mind (soul) was supposed to exist in a non-physical realm, how could it have any contact with the physical realm? It's partly caused by conflating the words material and physical. Something can easily be non-material but still exist in the physical realm. Think of Excel running on a computer.

I think this subject started to interest me, apart from the normal existential questions, when I began reading about animals and consciousness. I grew up when the consensus was that animals were biological "machines" with instincts and no consciousness. The Mirror Test is often used to verify the level of self-awareness among children, self-awareness being something that seems to demand consciousness. When researchers started using the mirror test on animals, some quite remarkable results were found. It can't come as a big surprise to any dog owner that some animals show signs of both intelligence and consciousness, but the latest animal I read about having passed the test was an ANT! I have read the science article critically, and it seems like a well-planned, solid experiment. Boy, was my biology teacher wrong! Imagine that. An ant.
 
That's awesome. :)
Yeah, we're advancing in this. I think the mirror test is a nice scientific experiment, because it has measurable results and therefore produces something scientists can put down on paper, but it isn't even necessary when you think about it. Anybody who ever owned an animal knows they have feelings. They can be playful, vengeful, they can grieve or feel guilty; some can even be great at lying. Those qualities require self-consciousness. If you can think about your environment in terms of what you want or what you did/will do, you have to have a concept of "self".
 
A lot of the thought going into making AI comes from philosophy, mainly Philosophy of Mind. That has a clear connection to metaphysics, which, after many years of being banned from science, is slowly coming back. Science seems desperate to keep away from any metaphysical questions like "Why is there something rather than nothing?", or my personal favorite, "Where did the laws of nature originate? Who came up with those?". It's only when you get something like Inflation Theory that science goes back to discussing subjects like what caused the Big Bang. Before that it was more like "Talking about 'before' the Big Bang resembles talking about north of the North Pole".

Also, classic reductionism is being challenged by holistic science. We used to (and still do) smash things to pieces and look at those to get an understanding of the parts, upon which we then build up our understanding of reality. That's called the bottom-up approach. Then came systems theory, etc., and the realization that properties can emerge when you combine similar things that do not have the property in themselves.

You can pick apart an old mechanical wristwatch and understand how it works, but none of the single parts can show you the time. That property emerges when you put the parts back together. Some argue that the watch is a bad example (I do), but it's easy to understand. Also, I think the holistic approach is being somewhat mystified. You could "easily" explain the watch using classic reductionism, but among the scientists I know, the broad feeling is that holism can be just as good as reductionism, as long as it's falsifiable. On top of that, even simple neural networks would never have come "to life" had it not been for holism. Finally, demystification went down the drain with quantum mechanics. Nature is weird.
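The watch analogy has a neat computational cousin: XOR. A single threshold "neuron" cannot compute XOR (it's not linearly separable), but wire three of them together and the capability emerges, even though no individual part has it. A toy sketch:

```python
# Emergence in miniature: none of these threshold units computes XOR
# on its own, but the three-unit network does.

def neuron(inputs, weights, bias):
    # A single part: fires (1) if the weighted input crosses threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor(a, b):
    # The whole: an OR unit and a NAND unit feed an AND unit.
    h_or = neuron([a, b], [1, 1], -0.5)
    h_nand = neuron([a, b], [-1, -1], 1.5)
    return neuron([h_or, h_nand], [1, 1], -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 00->0, 01->1, 10->1, 11->0
```

Like the watch, you can fully explain each part reductively, yet the interesting property only exists in the assembled whole.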

Holistic science has an almost occult ring to it, but it's proper science when done right. Reductionism can be regarded as a point of view. Holism as another. As long as they both show us the same reality and agree, they are just different tools in the box.
 
The main issue is probably the fact that if we ever were to bring true AI to life, we can't afford (and probably won't even be able) to let it do something we don't want, then "disassemble" it and try to find out what went wrong. Plus, even with our current software, you don't just fix one thing without breaking ten others, and the level of complexity of AI code will be magnitudes higher.
That's why I think the most important point in the whole set is that we need to have this figured out before we do something stupid. Which goes entirely against what we (humanity) usually do. :LOL:
So yeah, psychology, philosophy, etc. will be an important part.
Humanity has "jumped the gun" many times before, and we often came up with technology we "weren't ready for". Hell, even the whole concept of western society isn't working properly, because people prefer exploitation over cooperation. So the danger of a misused, and consequently loose and rampant, AI is more than real.
 
System theory is a relatively new tool in the box, but it's also very exciting. It can tell us things about reality that classic science can't. A lot of them are hard to swallow.

The example with the watch is simple, but imagine building a model of a living cell, say a neuron, building up from the particles it consists of, via atoms, molecules and organelles, up to the cell. We would still have little understanding of the whole brain, and the model would be very complicated. If we look from another perspective, top down, we see other patterns. There are strict rules for how particles work, but there are also rules about when a neuron fires, and we even know why. So instead we could build the model of a cell by looking at inputs and outputs, and dig into that.

With system theory we can pick our starting point, say a brain, and then work our way down, looking at inputs and outputs first. A good way of making such a model is to incorporate current AI. Considering how far we are, and the conscious ant, I would also say we have to stop and breathe, but trust me, that's not gonna happen. System theory taught me that. The way everything develops is like one giant system, and a giant system evolves with a lot of inertia. We humans actually have very little influence on the future, even though we think we define it. Hopefully HAL is a nice person ;)
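The "inputs and outputs first" idea above is exactly how the classic leaky integrate-and-fire model treats a neuron: a black box that integrates input and emits spikes, with no molecular detail at all. A sketch with arbitrary, illustrative parameter values:

```python
# Input/output model of a neuron (leaky integrate-and-fire style):
# integrate the input, leak over time, fire when a threshold is
# crossed. Parameter values are arbitrary, for illustration only.

def simulate(input_current, steps=50, leak=0.9, threshold=1.0):
    """Return the time steps at which the model neuron fires."""
    potential = 0.0
    spikes = []
    for t in range(steps):
        potential = potential * leak + input_current  # integrate + leak
        if potential >= threshold:
            spikes.append(t)   # output: a spike
            potential = 0.0    # reset after firing
    return spikes

# Weak input leaks away and never fires; stronger input fires
# repeatedly -- behaviour captured without simulating a single molecule.
print(len(simulate(0.05)), len(simulate(0.3)))
```

That's the top-down payoff: you get a usable model of the cell's behaviour long before you could ever build it up from particles.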
 
I wrote my first BASIC program in 1978, as a teenager, so I know what you mean ;) Code has become very sloppy.
Coding may have become "sloppy", but on the other hand, it's much easier to think through and optimize 600 lines of code you wrote yourself somewhere in a garage than a couple of million lines of code in some modern program or game. Programming is in a phase where one person is barely capable of creating functioning software of modern standards, and once you introduce more than two hands into the process, sloppy is what you get.
 
I think another reason for sloppy code is that back then hardware was expensive but programmers were relatively cheap, and not particularly rare. Almost everyone learned basic BASIC once it took off, and programming sat a lot closer to the hardware. In the UK, the BBC did The Computer Programme, which was aimed at teaching the population how to get started. Therefore money was spent on machine code and assembler, to squeeze every bit out of the hardware. Nowadays, due to Moore's Law, hardware has become cheaper than manpower. Fair enough. My computer is plenty fast in most cases, and in the critical cases the code is often optimized. Still, a C64 booted in a few seconds. :)
 
My solution to the AI threat would be to make sure we improve ourselves faster than we improve our (independent) machines.
 
The trouble is that, by definition, AI improves itself. It's not programmed per se. It's taught the basics and then it evolves. And it evolves a hell of a lot faster than the human brain.
 
It's been said that the mind is what the brain performs.

I'm suggesting we augment or replace the human brain with something that performs our minds better, before we go about creating strong/general AI, so that when we do decide to create it, it won't have any advantages because it won't be fundamentally different.
 
Oh yeah. I'm a big fan of future human augmentation, and if we nailed down consciousness and what makes us individuals, uploading our minds to a machine where we can remain "what we are", only with much improved mental capacity, would (could) be great.
But we're long way from that bridge. Maybe an AI could help us develop something like that. :p
 
The idea that these Snowflake digital 'natives' are somehow more tech-savvy. They are just über-users. What true invention will they develop? Too busy taking selfies.
 