Intel unveils real-time deepfake detection technology

Intel announced on the 16th that it has developed FakeCatcher, a technology that can detect fake videos with 96% accuracy. The technology was developed as part of Intel's responsible AI efforts. Intel's deepfake detection platform is the world's first real-time deepfake detector, returning analysis results within milliseconds.

"You have probably already seen videos of celebrities doing or saying things they never actually did," said Ilke Demir, a senior research scientist at Intel Labs.

Intel's real-time deepfake detection technology uses Intel hardware and software, runs on a server, and interfaces through a web-based platform. On the software side, the development team used a range of specialized tools to build the optimized FakeCatcher architecture, and ran the AI models for face and landmark detection with OpenVINO™.
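The article does not include code, but a minimal sketch of how a face-detection model might be run through OpenVINO's Python API is shown below. The model file name, input size, and output layout are illustrative assumptions, not details of Intel's actual FakeCatcher pipeline.

```python
# Minimal sketch: running a face-detection model on the CPU with OpenVINO's
# Python API. The model file, the 300x300 input size, and the output layout
# are assumptions for illustration only.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("face-detection.xml")        # hypothetical IR model file
compiled = core.compile_model(model, "CPU")
output_layer = compiled.output(0)

frame = cv2.imread("frame.jpg")                      # one decoded video frame
blob = cv2.resize(frame, (300, 300)).transpose(2, 0, 1)   # HWC -> CHW
blob = np.expand_dims(blob, 0).astype(np.float32)

detections = compiled([blob])[output_layer]
# Many detection models emit rows of [image_id, label, conf, x_min, y_min, x_max, y_max].
faces = [d for d in detections.reshape(-1, 7) if d[2] > 0.5]
print(f"detected {len(faces)} face(s) in the frame")
```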

The computer vision blocks are optimized with Intel® Integrated Performance Primitives, a multi-threaded software library, and OpenCV, a toolkit for real-time image and video processing; the inference blocks are optimized with Intel® Deep Learning Boost (Intel® DL Boost) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512); and the media blocks are optimized with Intel® Advanced Vector Extensions 2 (Intel® AVX2).
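As a rough illustration of the instruction-set dependence described above, the snippet below asks OpenCV whether the host CPU exposes AVX2 and AVX-512. It only probes the local machine and is not part of Intel's toolchain; the constant lookups are guarded because their exposure can vary between OpenCV builds.

```python
# Sketch: query whether the host CPU supports the vector instruction sets
# named above (AVX2, AVX-512), using OpenCV's hardware-support check.
# This is a local capability probe, not part of FakeCatcher itself.
import cv2

for name in ("CPU_AVX2", "CPU_AVX_512F"):
    flag = getattr(cv2, name, None)   # constant may not be exposed in every build
    if flag is None:
        print(f"{name}: constant not exposed by this cv2 build")
    else:
        supported = cv2.checkHardwareSupport(flag)
        print(f"{name}: {'available' if supported else 'not available'}")
```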

In addition, the development team used the Open Visual Cloud project to provide an integrated software stack for the Intel® Xeon® Scalable processor family. On the hardware side, the new deepfake detection platform can run up to 72 concurrent detection streams on 3rd Gen Intel® Xeon® Scalable processors.
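The 72-stream figure refers to Intel's own platform. Purely to illustrate the idea of fanning many independent detection streams out across a many-core server, here is a hedged sketch using a Python process pool; the analyze_stream function and the stream URLs are hypothetical placeholders, not Intel's implementation.

```python
# Illustrative sketch only: dispatching many independent detection streams to
# worker processes on a multi-core server. analyze_stream() is a hypothetical
# stand-in for a real detector.
from concurrent.futures import ProcessPoolExecutor

def analyze_stream(stream_url: str) -> tuple[str, str]:
    # A real worker would decode frames, run face detection and the
    # blood-flow-based classifier, and return a verdict for the stream.
    return stream_url, "real"  # placeholder verdict

if __name__ == "__main__":
    streams = [f"rtsp://camera-{i}/live" for i in range(72)]  # hypothetical sources
    with ProcessPoolExecutor(max_workers=8) as pool:          # size to the host CPU
        for url, verdict in pool.map(analyze_stream, streams):
            print(url, "->", verdict)
```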

Most deep learning-based detectors examine raw data, looking for signs of inauthenticity and identifying what is wrong with a video. FakeCatcher, by contrast, looks for authentic clues in real videos: the subtle "blood flow" visible in the pixels of a human face. When the heart pumps blood, the veins change color. The detector collects these blood-flow signals from across the face in the video and converts them, through its algorithms, into spatiotemporal maps. Deep learning models then instantly judge whether the video is real or fake.
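FakeCatcher's actual signal extraction and classifier are not published in this article. The toy sketch below only illustrates the general idea described above: averaging the green channel over a grid of face patches, frame by frame, to form a crude spatiotemporal "blood flow" map that a deep learning classifier could then consume. The input face crops and array shapes are assumptions.

```python
# Rough illustration of the idea described above: turn per-frame colour
# statistics from a grid of face patches into a spatiotemporal map that a
# classifier could consume. This is a toy sketch, not FakeCatcher itself.
import numpy as np

def spatiotemporal_map(face_frames: np.ndarray, grid: int = 8) -> np.ndarray:
    """face_frames: (T, H, W, 3) array of face crops over T frames.
    Returns a (grid*grid, T) map of mean green-channel intensity per patch,
    a crude stand-in for the blood-flow (PPG-like) signal."""
    t, h, w, _ = face_frames.shape
    ph, pw = h // grid, w // grid
    rows = []
    for gy in range(grid):
        for gx in range(grid):
            patch = face_frames[:, gy*ph:(gy+1)*ph, gx*pw:(gx+1)*pw, 1]
            rows.append(patch.mean(axis=(1, 2)))   # one time series per patch
    return np.stack(rows)                          # shape: (grid*grid, T)

# Example with random data standing in for 90 frames of a 128x128 face crop.
frames = np.random.rand(90, 128, 128, 3).astype(np.float32)
st_map = spatiotemporal_map(frames)
print(st_map.shape)  # (64, 90) -- this map would then go to a deep classifier
```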

The threat of deepfake videos is growing. According to Gartner, companies will spend up to $188 billion on cybersecurity solutions. Yet it is not easy to detect deepfake videos in real time: detection applications typically require the video to be uploaded for analysis, and it can take hours before results come back.

Deception enabled by deepfakes can have damaging consequences, such as eroding trust in the media. FakeCatcher can help restore that trust by enabling users to distinguish real content from fake.

Several potential use cases for FakeCatcher can be envisioned. Social media platforms could use the technology to prevent users from uploading harmful deepfake videos. Global news organizations could use it to avoid inadvertently reporting on manipulated videos. And non-profit organizations could use the platform to make deepfake detection accessible to everyone.
