Machine learning is not harmless: how scary it gets when it turns evil

Phoenix Technology News, Beijing, October 28 — According to foreign media reports, although machine learning is the hottest topic in technology today, it still brings plenty of trouble and worry. The most obvious problem is that the technology is encroaching on everyone's privacy: to mimic humans, machine learning must draw on the most intimate details of human life. We should therefore protect what makes each of us unique and work out mechanisms for controlling machine learning; otherwise, one day it may be used to commit crimes in our name. The examples below show how frightening machine learning can be when it is turned to "black" purposes. The technology itself is neutral by nature, but in the wrong hands it can cause great harm. This article is also a warning to everyone: while enjoying the convenience the technology brings, we should stay on guard, or its adverse effects may engulf human society.

Face recognition. Neural networks and deep learning algorithms have dramatically improved image recognition and processing, making our social platforms, search engines, game consoles, and authentication mechanisms smarter. But will they embark on the path of evil? Apparently they will. FindFace is the best-known example of a face recognition application. Launched in Russia earlier this year, it lets users check whether a person in a photo is also registered on VK (Russia's equivalent of Facebook). In less than a year, the application has attracted more than 200 million users in Eastern Europe, and its recognition ability is remarkable even against VK's enormous image database. However, many people use it not for socializing but for far less innocent purposes.
For example, the Moscow police department adopted FindFace and connected it to the capital's 150,000 surveillance cameras. Many others use the technology to track people down. Experts at Kaspersky Lab have shared a way to trick face recognition, but holding the required pose is simply too tiring. So be careful about the selfies you post on social networks: once a machine learning engine absorbs uploaded photos into its "warehouse," no one can predict what storm a single picture may set off.

Seeing through mosaics. Mosaic pixelation is one of the most common techniques for protecting privacy in images and video: once sensitive information is covered by a mosaic, the human eye can no longer make it out. Machine learning, however, can "see through" the mosaic without difficulty. Researchers from Cornell and the University of Texas recently trained an image recognition algorithm that easily defeats current pixelation techniques and recovers the sensitive information behind them. If such technology falls into the wrong hands, the consequences are unimaginable. Reportedly, after training, the neural network can identify pixelated faces, objects, and handwriting with up to 90% accuracy. The researchers were not out to do harm; on the contrary, they wanted to warn the relevant authorities.
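To make the mosaic technique concrete, here is a minimal sketch of block pixelation, the obfuscation the researchers' network was trained to defeat. The `pixelate` helper is hypothetical illustration code, not from the research described above; it simply averages each block of pixels, which is why a model trained on (original, pixelated) pairs can learn to guess what was behind the blocks.

```python
import numpy as np

def pixelate(img, block=8):
    """Mosaic a grayscale image by replacing each block x block tile
    with the tile's mean intensity."""
    h, w = img.shape
    # Crop so both dimensions divide evenly by the block size.
    h2, w2 = h - h % block, w - w % block
    img = img[:h2, :w2]
    # Split into tiles, average each tile, then broadcast the mean
    # back over the tile area.
    tiles = img.reshape(h2 // block, block, w2 // block, block)
    means = tiles.mean(axis=(1, 3), keepdims=True)
    return np.broadcast_to(means, tiles.shape).reshape(h2, w2)
```

The key point is that averaging discards detail for the human eye but leaves a statistical trace: each block's mean still depends on the hidden pixels, which is the signal a trained network exploits.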