10/30/2019

Headline October 31, 2019/ ''' ARTIFICIALLY -STUDENTS- INTELLIGENCE '''






EXPERIENCED EDUCATORS KNOW THE importance of using good quality teaching aids to deliver effective lessons to students.

If tasked with introducing shapes to small children, a teacher could start by cutting out differently coloured shapes such as a triangle, a circle and a square. Next, they could hold up one shape at a time towards the students and announce its name.

So, they could first display the blue triangle and loudly say 'triangle'. Then they could pick up the red circle and call out 'circle', and finally they could lift the green square and announce 'square'.

This process can be repeated a few times to ensure that the shapes and their associated names are well registered with the students.

However, if the above steps are carefully evaluated, an underlying problem emerges that can result in teaching the wrong concepts to the students.

As the shapes are of different colours, and as the teacher holds up a shape and loudly calls out its name, some students are likely to relate the name of the shape to its colour instead.

For example, in the above scheme of things it is entirely reasonable for students to conclude that 'triangle' means the colour blue, 'circle' means red, and 'square' refers to the colour green.

Although these conclusions are incorrect, they are nevertheless logical, since each called-out name can be linked to either a shape or a colour.

Of course, good instructors know how to avoid this issue in the first place by simply ensuring that all the shapes have the same colour.

This small modification ensures that the children will correctly associate each name with its shape and not confuse it with a colour.

In summary, if different colours were indeed used for the shapes, the training could result in students developing two distinct, opposing but equally consistent types of understanding.

The first type of conclusion associates each name with its correct shape, whereas the second relates the name of the shape to its colour.

The key observation is that both outcomes are rational with respect to the data that was presented to the students, but only the first one is correct.

Incidentally, what is true for humans is also true for Artificial Intelligence systems, which are equally prone to drawing wrong conclusions that are nevertheless consistent with the data.

To highlight this point, Marvin Minsky, a leading cognitive scientist who is considered one of the pioneers of AI, narrated an incident pertaining to an early Artificial Neural Network [ANN] that was being developed to distinguish between friendly and enemy tanks.

An ANN, as well as its more recent instantiation known as Deep Learning, consists of several interconnected processing units that transmit signals to each other. The technical architecture of the network broadly mimics the human brain, which consists of an extremely large number of neuron cells linked through synaptic connections.
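To make this concrete, here is a minimal sketch of such a network in Python with NumPy: a few units arranged in layers, each passing weighted signals forward to the next. The layer sizes, weights and input values are arbitrary assumptions for illustration only, and training (adjusting the weights to match labelled examples) is omitted.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squash a unit's combined input into an activation between 0 and 1.
    return 1.0 / (1.0 + np.exp(-x))

# An arbitrary tiny architecture: 4 input units -> 3 hidden units -> 1 output unit.
W1 = rng.normal(size=(4, 3))     # connection weights from the input units to the hidden units
W2 = rng.normal(size=(3, 1))     # connection weights from the hidden units to the output unit

def forward(x):
    hidden = sigmoid(x @ W1)     # each hidden unit sums the signals it receives
    return sigmoid(hidden @ W2)  # the output unit combines the hidden units' signals

# One example input (say, four pixel intensities) flowing through the network.
print(forward(np.array([0.2, 0.9, 0.1, 0.5])))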

Minsky explained that the network was trained on a set of images, each duly labelled as showing a friendly tank or otherwise.

To the researchers' pleasant surprise, the network was able to learn, and it achieved good performance at distinguishing one group of tanks from the other. Paradoxically, though, this high accuracy was achieved without the network having learnt anything about tanks.

Coincidentally, what had happened was that the images of the enemy tanks had been taken in cloudy conditions, whereas the images of the friendly tanks were clear. The network therefore achieved its performance by simply learning to distinguish cloudy images from clear ones.

Hence, just as before, what the artificial neural network learnt was accurate with respect to the input data, but nevertheless incorrect with respect to its goal of identifying the two categories of tanks.
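The failure is easy to reproduce on made-up data. In the sketch below (a toy under assumed numbers, not the original study), 'enemy' photographs are simulated as darker, cloudy-looking pixel arrays and 'friendly' photographs as brighter ones; none of the pixels carry any information about tanks. A simple logistic-regression classifier, trained by gradient descent, still separates the two classes almost perfectly, because overall brightness alone explains every label, and it then confidently mislabels 'enemy' images taken on a clear day.

import numpy as np

rng = np.random.default_rng(1)
n, pixels = 200, 64

# Cloudy "enemy" images have mean intensity 0.3; clear "friendly" images have 0.7.
enemy    = rng.normal(loc=0.3, scale=0.1, size=(n, pixels))
friendly = rng.normal(loc=0.7, scale=0.1, size=(n, pixels))
X = np.vstack([enemy, friendly])
y = np.concatenate([np.zeros(n), np.ones(n)])     # 0 = enemy, 1 = friendly

# Logistic regression fitted with plain gradient descent.
w, b = np.zeros(pixels), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # predicted probability of "friendly"
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

print("training accuracy:", np.mean((p > 0.5) == y))   # close to 100%

# Photograph the same "enemy" tanks on a clear day and the classifier fails,
# because all it ever learnt was cloudy versus clear.
clear_enemy = rng.normal(loc=0.7, scale=0.1, size=(n, pixels))
p_new = 1.0 / (1.0 + np.exp(-(clear_enemy @ w + b)))
print("clear-day enemy images called 'friendly':", np.mean(p_new > 0.5))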

The critical issue here is that modern-day intelligent systems such as ANNs are quite powerful in terms of both software and hardware capabilities.

If presented with a large set of data containing different categories, their learning process consists of uncovering patterns that uniquely identify each of the respective classifications.

To achieve this objective, they bring their enormous analytical capacity to bear on discovering a plethora of signature patterns that are unique to each category. This overabundance of patterns includes a large proportion that are merely coincidental, insignificant, or present due to outright data errors.

The key challenge is that the network has no reliable means of preferring one such pattern over another, and it can therefore converge to a solution that is accurate with respect to the data yet outright incorrect, as we saw in the tank example.
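A toy illustration of that point, using made-up numbers rather than any real dataset: two different rules, one based on a genuine feature and one based on a purely coincidental feature, can agree with every training example, so the training data on its own gives the learner no reason to prefer the right one. Only new data, in which the coincidence no longer holds, tells them apart.

import numpy as np

# Column 0 is the 'real' signal; column 1 is a coincidence that happens to
# track the label in this particular training set.
X_train = np.array([
    [1.0, 1.0],   # class 1: real signal present, coincidence also present
    [1.0, 1.0],
    [0.0, 0.0],   # class 0: real signal absent, coincidence also absent
    [0.0, 0.0],
])
y_train = np.array([1, 1, 0, 0])

def rule_real(X):            # a rule that uses the genuine feature
    return (X[:, 0] > 0.5).astype(int)

def rule_coincidence(X):     # a rule that uses the accidental feature
    return (X[:, 1] > 0.5).astype(int)

print(np.mean(rule_real(X_train) == y_train))         # 1.0
print(np.mean(rule_coincidence(X_train) == y_train))  # 1.0: indistinguishable so far

# New data in which the coincidence is broken separates the two rules.
X_new = np.array([[1.0, 0.0], [0.0, 1.0]])
y_new = np.array([1, 0])
print(np.mean(rule_real(X_new) == y_new))             # 1.0
print(np.mean(rule_coincidence(X_new) == y_new))      # 0.0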

Hence, just as with humans, the learning ability of an AI system is tightly coupled to the quality of the data that is fed to it.

The Honor and Serving of the Latest Global Operational Research and Thinking on Artificial Intelligence continues. The World Students Society thanks author Vaqar Khamisani, Global Director of Insights.

With respectful dedication to the Scientists, Students, Professors and Teachers of the World.

See Ya all on Facebook, prepare and register for Great Global Elections on The World Students Society, for every subject in the world: wssciw.blogspot.com and Twitter - !E-WOW! - the Ecosystem 2011:

''' A.I. & U.I.'''

Good Night and God Bless

SAM Daily Times - the Voice of the Voiceless
