Why do humans need machines to learn?

Deepika
4 min read · Jun 11, 2020

It's ironically funny that the deeper I think about Artificial Intelligence, the more I find myself marveling at, and yet being puzzled by, Human Intelligence.

So, after working on a navigation problem in reinforcement learning, I have come to understand that we humans can easily learn things that take a lot of effort to teach a machine. A toddler with only two years' worth of data can navigate a dynamic environment in a few seconds, whereas I spent a month making an RL agent learn to solve the same kind of problem in a much simpler environment.
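
To give a feel for what that month looked like, here is a minimal sketch of the kind of agent involved, assuming a tabular Q-learning setup on a made-up 5x5 gridworld; the environment, rewards and hyperparameters below are illustrative, not my actual project:

```python
import random

# A hypothetical 5x5 gridworld: start at (0, 0), goal at (4, 4).
SIZE = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Apply a move, clipped to the grid; small step cost, reward at the goal."""
    r, c = state
    dr, dc = ACTIONS[action]
    nxt = (max(0, min(SIZE - 1, r + dr)), max(0, min(SIZE - 1, c + dc)))
    done = nxt == (SIZE - 1, SIZE - 1)
    return nxt, (1.0 if done else -0.01), done

# Tabular Q-learning with epsilon-greedy exploration.
Q = {(r, c): [0.0] * len(ACTIONS) for r in range(SIZE) for c in range(SIZE)}
alpha, gamma, epsilon = 0.1, 0.99, 0.1

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        if random.random() < epsilon:
            action = random.randrange(len(ACTIONS))  # explore
        else:
            action = max(range(len(ACTIONS)), key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # Move Q(s, a) toward the bootstrapped target r + gamma * max Q(s', .).
        target = reward + (0.0 if done else gamma * max(Q[nxt]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = nxt
```

Even a toy loop like this takes hundreds of episodes to learn what a toddler picks up in seconds.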

But just like machines, we learn from data too. We absorb a lot of features and rewards and make decisions, which lead to new states and experiences, and we improvise and learn from the data we get exposed to. Of course, I remind myself that A.I. indeed comes from understanding and studying human minds.

Even then, why do we need machines? It looks like humans are capable of making complex decisions that we can only envision for our machine counterparts. We are so complex that we designed the machines themselves, right? I guess the answer to why we need machines lies in the amount of data we can take in at any point in time. Humans can't process and analyze that massive an amount of data, or can we?!

I have a curious insight about this; bear with me. Is the very fact that we do not possess the capability to ingest huge amounts of data actually the reason we are able to design and build that capability into machines? Let me break down my reasoning.

Can humans really not process massive amounts of data? In our waking life, we are flooded with data through our various senses, some of it consciously, but a major part of it subconsciously; either way, we take it all in. Is all of that data actually relevant and used in decision-making?

I was once thinking about how I made decisions, why I chose what I chose. After a lot of contemplation and observation, I started becoming aware of my decision-making process. For now, I can consciously follow only the simple decisions I make. Let me explain what I observed.

First, I took the problem in. I oriented myself by understanding the bigger picture of what it was I was trying to solve. Oftentimes, the right or optimal decision was so obvious that I chose instantaneously, almost oblivious to how I arrived at it. This was not the case with the more complex problems, or those of particular significance.

Sometimes I kept running the problem over and over again. I kept breaking it down and trying to put the pieces back together. I kept running (experience-replaying, if you will) through various possible decisions and their repercussions. A few problems were solved after running through all possible solutions. But a few still persisted. It was too risky to just make a choice!
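
That "experience-replaying" borrows from the replay buffers of deep reinforcement learning (think DQN). Here is a minimal sketch of the idea, with illustrative names and an assumed capacity:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of past transitions, sampled uniformly at random.

    Sampling at random breaks the correlation between consecutive
    experiences, a little like returning to a problem after a stretch
    of unrelated ones.
    """
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # the oldest memories fall away

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```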

Those problems ran their course, tearing me apart, and then went to the back of my mind, put to rest. After all, there was everyday life to attend to. So I moved on and went about things, which in turn generated new data. These were unrelated, trivial experiences; this data was completely uncorrelated with the problem. Then, like a spark out of nowhere, the problem was solved, the most optimal decision made! Almost every one of us can experience this Eureka moment if we observe ourselves closely. It is a very simple yet rewarding experiment to do on oneself. So, what does all this have to do with machines?

Everywhere I read about machine learning models, they are searching for correlations in data to make decisions. This makes me wonder: if we are designing machines to look deep into the data, were we designed to be the opposite, to not look too deep into the data, but at the bigger picture?
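
To make that "searching for correlations" concrete, here is a toy example, assuming NumPy and entirely made-up feature columns:

```python
import numpy as np

# Made-up feature columns, purely for illustration.
hours_studied = np.array([1, 2, 3, 4, 5, 6], dtype=float)
exam_score = np.array([52, 55, 61, 68, 74, 80], dtype=float)

# Pearson correlation: +1 is a perfect positive linear relationship.
r = np.corrcoef(hours_studied, exam_score)[0, 1]
print(f"correlation: {r:.3f}")  # close to 1.0 for this toy data
```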

Or wait, maybe looking deep into the data is a process that happens subconsciously in humans. But then why are we not able to understand our own algorithms the way we understand many complex machines? I think we understand ourselves subconsciously; we are just not aware of it. Is that why there had to be a concept of the subconscious in our model?!

I think humans evolved to see the bigger picture of things from the massive amount of data we take in. A truly intelligent model is indeed one that learns not only from relevant data but also from the unrelated data it stumbles upon to make decisions.

Deepika

Deepika loves reading and writing about just about anything that intrigues her. On a normal day, you will find her pondering over something that has aroused her curiosity.