FEED Issue 05

60 FUTURE SHOCK MICRO FOCUS

generating a huge amount of metadata about those programmes, and then they're comparing that to Nielsen and other audience analytics. And they're trying to find a pattern between people turning off a programme and the content. What was said on-screen when people started to turn off that programme?"

The uses for the metadata produced by the Micro Focus analytics AI are bounded only by the imagination of the user. Consequently, the tools are designed to be flexible and easy to integrate into customers' systems. "Everybody is ever so slightly different, and we have to cater for those changes. It's all API-based and we have open-sourced the applications, so people can add on to them, and you can build or modify your pages to incorporate our solution."

"One of the problems we have is that we can give you all this great metadata, but it will be hard for you to produce an application that exposes the metadata as well as we can. So, we provide these free open-source widgets that can nicely expose our metadata in your application. You've got timelines and scrollable text, and when someone's speaking the words are highlighted. All that plumbing, we provide."

THE INTERNET OF RECOGNITION

Well aware of the booming media consumption on personal devices, Micro Focus is now focusing its efforts on mobile. "We're working on analysing rich media that comes from mobile devices, whether that's drones, body-worn cameras, or iPhones, and being able to index the huge amount of information provided by them. Using motion from a video, we can effectively create 3D models." The potential for 3D modelling of anything shot on a mobile camera is jaw-dropping. It effectively turns the camera on every mobile device into a 3D scanner.
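The "plumbing" behind a highlight-as-you-speak widget reduces to one lookup: given word-level timestamps from the speech metadata, find the word active at the current playback time. The sketch below shows the idea; the data shape and function name are illustrative assumptions, not Micro Focus's actual API.

```python
import bisect

# Hypothetical word-level timing metadata, as a speech-to-text
# engine might emit it: (start_seconds, word), sorted by start time.
WORD_TIMINGS = [
    (0.0, "Welcome"),
    (0.6, "to"),
    (0.8, "the"),
    (1.0, "evening"),
    (1.6, "news"),
]

def word_at(timings, playback_time):
    """Return the word being spoken at playback_time, for highlighting."""
    starts = [start for start, _ in timings]
    # bisect_right counts how many words have already started;
    # the last of those is the one currently being spoken.
    i = bisect.bisect_right(starts, playback_time) - 1
    if i < 0:
        return None  # before the first word
    return timings[i][1]

print(word_at(WORD_TIMINGS, 1.2))  # → evening
```

A player would call this on every time update and move a CSS class (or equivalent) to the returned word.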
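Micro Focus doesn't describe its reconstruction method, but the principle behind turning camera motion into 3D structure can be illustrated with its simplest case: the same point seen from two camera positions shifts in the image, and that shift (disparity) gives its depth. All numbers here are assumed for illustration.

```python
# Simplest two-view case of "motion to 3D": two identical pinhole
# cameras separated by a baseline B along x see a point at slightly
# different image positions; depth follows as Z = f * B / disparity.

FOCAL_PX = 1000.0   # focal length in pixels (assumed)
BASELINE_M = 0.1    # camera moved 0.1 m between the two frames (assumed)

def depth_from_disparity(x_left, x_right):
    """Recover depth in metres of a point seen at x_left / x_right pixels."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must shift between the two views")
    return FOCAL_PX * BASELINE_M / disparity

# A point 2 m away projects 50 px apart: f * B / Z = 1000 * 0.1 / 2.
print(depth_from_disparity(500.0, 450.0))  # → 2.0
```

A full structure-from-motion pipeline repeats this triangulation across many matched points and many frames, with camera poses estimated rather than known.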

TOPIC MAP

The multi-coloured graphic below shows a set of key concepts automatically derived by Micro Focus's AI technology from a massive set of data.

"Once you know that you've got that 3D map from multiple devices in an area, then you get something called the Internet of Recognition. You've heard of the Internet of Things, but that's basically dealing only with static devices and points. But when I can look at an incident on the street with these multiple 3D points of view, I can get a fully rounded sense of what's occurring."

The company is also releasing a new tool called Dynamic Corpus Connector. Rather than bringing the video to the AI to be analysed, the Connector AI actively searches for the content to analyse. "The thing that goes out and collects the data is now empowered with artificial intelligence, so it's effectively looking at a repository to find the things you need. My analogy is, where previously, when open-cast mining for gold, you dug a hole in the ground and filtered extracted material to find the gold, now, instead, you find the vein and follow it. So your use case will be empowered to be faster and better."

The Dynamic Corpus Connector will be useful on the dark web, which is not indexed by conventional search engines. It can search for things like pirated content. "It will be incredibly powerful," says Humphrey. If the progress of AI over the last couple of years is anything to go by, that may be a big understatement.
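The "find the vein and follow it" idea can be sketched as a best-first crawl: rather than exhaustively digging through a repository, the collector scores each item for relevance and expands the most promising links first. The toy link graph, scoring function, and names below are illustrative assumptions, not the actual Dynamic Corpus Connector.

```python
import heapq

# Toy repository: each item links to others, standing in for a web
# of documents or media files being crawled.
LINKS = {
    "start": ["a", "b"],
    "a": ["gold1", "c"],
    "b": ["c"],
    "c": ["gold2"],
    "gold1": [],
    "gold2": [],
}

def relevance(item):
    """Hypothetical relevance score; a real system would use an AI model."""
    return 1.0 if item.startswith("gold") else 0.1

def best_first_collect(start, limit=4):
    """Visit the highest-scoring reachable items first ("follow the vein")."""
    # heapq is a min-heap, so push negated scores to pop the best first.
    frontier = [(-relevance(start), start)]
    visited = set()
    found = []
    while frontier and len(found) < limit:
        _, item = heapq.heappop(frontier)
        if item in visited:
            continue
        visited.add(item)
        found.append(item)
        for nxt in LINKS.get(item, []):
            if nxt not in visited:
                heapq.heappush(frontier, (-relevance(nxt), nxt))
    return found

print(best_first_collect("start"))  # → ['start', 'a', 'gold1', 'b']
```

Note that "gold1" is reached on the third visit, before the crawl finishes the shallow layer — the prioritised frontier is what distinguishes this from the dig-everywhere breadth-first approach in the mining analogy.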

feedzine feed.zine feedmagazine.tv
