Douglas McKinley

Market Development Manager at Thomson Reuters

Based in Milan, Italy, Douglas is at the forefront of the latest financial enterprise technologies and solution innovation at Thomson Reuters.
For Thomson Reuters customers and partners, that means predictive analytics to improve how they find, extract and tag data; semantic analysis and machine learning to generate sentiment on news and social media; web screening to identify hidden risks and help protect their business; and machine-learning algorithms to spot suspicious trading patterns and potential fraud.
Through his role, Douglas drives the use of cutting-edge Thomson Reuters technology such as Big Data, Open Linked Data, Machine Readable News and the Open Platform, along with upcoming innovations on the horizon such as Artificial Intelligence, Predictive Analytics, Unstructured Data, Ethereum and blockchain concepts.
On the Enterprise Market Development team, he brings this innovation to financial enterprises across continental Europe, helping them understand the power Thomson Reuters has to offer at scale.

ABSTRACT

The robots are coming. And they need sensible, clean, structured data sets for world domination.

If one were to look over the buzzword nominees for 2017, Machine Learning and Artificial Intelligence (AI) would safely rank among the top contenders. And it’s little wonder. AI and its cousins Machine Learning, Cognitive Learning, Deep Learning and Neural Networks have become the undisputed news-fodder champions over the last few months, in large part thanks to decidedly threatening coverage with little or no grounding in the technology, ranging from audacious comments from autonomous-car billionaires predicting an AI-fuelled world war to misleading (if not fake) news about Facebook engineers panicking and shutting down a dangerously smart AI after its bots developed their own language that no one could decipher. It didn’t happen exactly that way.
Artificial intelligence is here, and it has been here for some time. Current “deep learning” systems are not news; it just so happens we have more data than we did before and a whole lot more computing power to execute, model and analyze efficiently. AI is even closer to most users than we’d care to imagine: from auto-prompt in an email, to robo-advisers providing basic investment services at lower cost, to social media apps learning about their users so advertisers can more effectively target the right audience. It’s task-specific AI like this that will eventually drive us to work each morning and predict when we are about to fall ill.
Hyperbole aside, it’s the code running behind the scenes that is at the center of AI, code that will execute a series of “narrow tasks” to make our lives easier or steal our jobs, depending on your perspective.
But to get the desired results, a properly functioning algorithm requires data. Accurate, clean, structured data. And lots of it. These days data scientists typically spend 80 per cent of their time searching for, correcting and consolidating data, with the remainder dedicated to developing the algorithms themselves.
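As an illustrative sketch only (the file, column names and model choice are hypothetical, not Thomson Reuters data or tooling), the kind of work that consumes that 80 per cent often looks something like this in practice, with the model itself arriving only at the very end:

```python
# A minimal, hypothetical sketch of the "80 per cent": cleaning and structuring
# raw records before any model sees them. All names below are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

raw = pd.read_csv("trades_raw.csv")  # hypothetical export of raw trade records

clean = (
    raw.drop_duplicates(subset="trade_id")                        # consolidate duplicate records
       .assign(
           trade_date=lambda df: pd.to_datetime(df["trade_date"], errors="coerce"),
           notional=lambda df: pd.to_numeric(df["notional"], errors="coerce"),
       )
       .dropna(subset=["trade_date", "notional", "counterparty"])  # drop rows that cannot be repaired
)
clean["counterparty"] = clean["counterparty"].str.strip().str.upper()  # normalise free-text labels

# Only now the remaining "20 per cent": a simple model to flag suspicious trades.
features = clean[["notional"]]
labels = clean["flagged"]  # hypothetical ground-truth column
model = LogisticRegression().fit(features, labels)
print(model.predict(features[:5]))
```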
If anyone knows about the challenge of structuring and creating data, it is Thomson Reuters. As the original big data fintech, our big data content sets and tools turn the 80/20 rule upside down for the financial industry, using tools that surface patterns learned from datasets and users. In finance and beyond, the promise of AI is held back by legacy technology systems, massive amounts of data and emerging innovations that don’t quite hit the spot. However the AI future might seem, it is the data behind it that will make the difference.