April 1, 2023

Tumbler Ridge News

Complete News World

Week of shocking discoveries

This week, researchers detailed an AI technique that lets attackers track the movements of remote-controlled robots even when their communications are end-to-end encrypted. The co-authors, based at the University of Strathclyde in Glasgow, say their study shows that applying cybersecurity best practices is not enough to stop attacks on autonomous systems.

Remote control, or teleoperation, promises to let operators guide one or more robots from afar in a variety of settings. Startups like Pollen Robotics, Beam and Tortoise have demonstrated the usefulness of remote-controlled robots in grocery stores, hospitals and offices.

Other companies are developing remote-controlled robots for tasks such as demining or exploring high-radiation sites. But the new research shows that teleoperation, even when it is described as "secure," is risky because it is vulnerable to surveillance.

In one paper, the Strathclyde co-authors describe how they used a neural network to infer information about the tasks a remote-controlled robot is performing. After collecting and analyzing samples of TLS-protected traffic between the robot and its controller, the neural network could identify robot movements roughly 60% of the time, and could also reconstruct "warehousing workflows" (e.g., picking up packages) with "high accuracy."
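The attack described above works because encrypted traffic still leaks metadata: packet sizes and timing vary with what the robot is doing. The following is a minimal, illustrative sketch of that idea, not the researchers' actual pipeline. The two "actions," their traffic shapes, and all parameters are invented for demonstration; a synthetic capture stands in for real TLS packet traces.

```python
# Hedged sketch of a traffic-analysis side channel on encrypted robot
# control traffic. Payloads are never read; the classifier sees only
# packet-size statistics, which differ by task. All traffic shapes and
# action names ("pickup", "navigate") are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def capture(action, n_packets=50):
    """Simulate one captured burst: only packet sizes are observable."""
    if action == "pickup":       # invented: bursty, larger packets
        return rng.normal(900, 120, n_packets)
    return rng.normal(400, 60, n_packets)  # "navigate": steadier, smaller

def features(sizes):
    """Side-channel features computed without decrypting anything."""
    return [sizes.mean(), sizes.std(), sizes.min(), sizes.max()]

X, y = [], []
for _ in range(200):
    for label, action in enumerate(["pickup", "navigate"]):
        X.append(features(capture(action)))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(
    np.array(X), np.array(y), random_state=0)

# A small neural network, standing in for whatever architecture the
# researchers actually used.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0))
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On synthetic data this cleanly separated, accuracy is near-perfect; real traffic is far noisier, which is consistent with the partial accuracies the paper reports. The broader point the sketch illustrates is that TLS protects confidentiality of payloads, not of traffic patterns.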

A new study by Google and University of Michigan researchers points to a danger of a less immediate kind. It examined people's relationships with AI-powered systems in countries with weak legal protections and a broad "national trust" in AI.

The study looked at users in India of instant-loan platforms that target "financially stressed" borrowers. According to the co-authors, users felt indebted for the "boon" of instant loans and obligated to accept harsh terms, share sensitive data, and pay high fees. The researchers say the findings illustrate the need for greater "algorithmic accountability," especially where AI is used in financial services.


"We argue that accountability is shaped by the balance of power between the platform and the user, and we caution policymakers against taking a purely technical approach to improving algorithmic accountability," they wrote.

“Instead, we call for contextual interventions that empower users, enable meaningful transparency, rebuild designer-user relationships, and encourage practitioners to engage in critical thinking toward broader accountability.”