Police in Durham are preparing to go live with an artificial intelligence (AI) system designed to help officers decide whether or not a suspect should be kept in custody.


The system classifies suspects as low, medium or high risk of offending and has been tested by the force.
It was trained on five years of offending-history data.

One expert said the tool could be useful, but the risk that it could skew decisions should be carefully assessed.
Data for the Harm Assessment Risk Tool (Hart) was taken from Durham police records between 2008 and 2012.
The system was then tested during 2013, and the results - showing whether suspects did in fact offend or not - were monitored over the following two years.

Forecasts that a suspect was low risk turned out to be accurate 98% of the time, while forecasts that they were high risk were accurate 88% of the time.
This reflects the tool's built-in predisposition - it is designed to be more likely to classify someone as medium or high risk, in order to err on the side of caution and avoid releasing suspects who may commit a crime.
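The article does not say which algorithm Hart uses or how its caution towards higher-risk classes is implemented. The sketch below, in Python with scikit-learn and entirely invented data, shows one common way such a predisposition can be built into a classifier - by weighting the costlier classes more heavily - and how forecast accuracy can then be measured separately for each predicted risk band, in the same way as the figures quoted above. All names, weights and data here are illustrative assumptions, not details of Hart itself.

```python
# Hypothetical sketch: a three-class risk forecaster with a deliberate bias
# toward the higher-risk classes, evaluated on a later time period.
# The model, class weights and features are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Fake training data standing in for 2008-2012 custody records.
X_train = rng.normal(size=(5000, 10))   # e.g. age, prior counts, offence codes...
y_train = rng.choice(["low", "medium", "high"], size=5000, p=[0.5, 0.3, 0.2])

# Weighting the "high" class more heavily makes the model more willing to
# predict medium/high, i.e. to err on the side of caution.
model = RandomForestClassifier(
    n_estimators=200,
    class_weight={"low": 1.0, "medium": 2.0, "high": 5.0},
    random_state=0,
)
model.fit(X_train, y_train)

# Fake 2013 test cases, with outcomes observed over the following two years.
X_test = rng.normal(size=(1000, 10))
y_test = rng.choice(["low", "medium", "high"], size=1000, p=[0.5, 0.3, 0.2])
pred = model.predict(X_test)

# Accuracy conditioned on the forecast, mirroring the way the article reports
# results ("forecasts that a suspect was low risk turned out to be accurate...").
for label in ["low", "medium", "high"]:
    mask = pred == label
    if mask.any():
        print(label, "forecasts correct:", (y_test[mask] == label).mean())
```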

During the trial period, the accuracy of Hart was monitored but it did not impact custody sergeants' decisions, said Sheena Urwin, head of criminal justice at Durham Constabulary.
"I imagine in the next two to three months we'll probably make it a live tool to support officers' decision making," she told the BBC.


Ms Urwin explained that suspects with no offending history would be less likely to be classed as high risk by Hart, though if they were arrested on suspicion of a very serious crime, such as murder, that would have an "impact" on the output.
Prof Lawrence Sherman, director of the University of Cambridge's Centre for Evidence-based Policing, was involved in the tool's development.
He suggested that Hart could be used in various cases - such as when deciding whether to keep a suspect in custody for a few more hours; whether to release them on bail before a charge; or, after a charge has been made, whether to remand them in custody.
"It's time to go live and to do it in a randomised experiment is the best way," he told the BBC.
During the upcoming experiment, officers will access the system in a random selection of cases, so that its impact when used can be compared to what happens when it is not.
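The article does not describe how Durham will allocate cases between the two groups. As a rough illustration of the randomised design mentioned above, the Python sketch below assigns each custody case deterministically but pseudo-randomly to a "hart" or "control" arm, so the two groups can later be compared. The case identifiers and the 50/50 split are assumptions.

```python
# Hypothetical sketch of random allocation for the trial: a seeded coin flip
# decides whether the officer handling a case sees Hart's forecast.
import random

def allocate_case(case_id: str, seed: int = 42) -> str:
    """Deterministically assign a case to the 'hart' or 'control' arm."""
    rnd = random.Random(f"{seed}:{case_id}")
    return "hart" if rnd.random() < 0.5 else "control"

for case in ["DUR-2017-0001", "DUR-2017-0002", "DUR-2017-0003"]:
    print(case, allocate_case(case))
```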
Bias concerns

Last year, US news site ProPublica published a widely cited investigation into an algorithm used by authorities to predict the likelihood of an arrestee committing a future crime.
The investigation suggested that the algorithm amplified racial biases, including making overly negative forecasts about black versus white suspects - although the firm behind the technology disputes ProPublica's findings.


"To some extent, what learning models do is bring out into the foreground hidden and tacit assumptions that have been made all along by human beings," warned Prof Cary Coglianese, a political scientist at the University of Pennsylvania who has studied algorithmic decision-making.

"These are very tricky [machine learning] models to try and assess the degree to which they are truly discriminatory."

The Durham system draws on data beyond a suspect's offending history, such as their postcode and gender.
'Advisory' information

However, in a submission about the system to a parliamentary inquiry on algorithmic decision-making, the authors express confidence that they have mitigated the risks involved:

"Simply residing in a given post code has no direct impact on the result, but must instead be combined with all of the other predictors in thousands of different ways before a final forecasted conclusion is reached."

They also stress that the forecasting model's output is "advisory" and should not remove discretion from the police officer using it.
An audit trail, showing how the system arrived at any given decision should scrutiny be required later, will also be accessible, Prof Sherman said.
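Neither the model type nor the form of the audit trail is described in the article. The sketch below assumes a decision-tree model and shows how the path a case takes through the tree could be recorded as a step-by-step audit record. It also illustrates the earlier point about the postcode field: in a tree model, a single predictor contributes only in combination with the other predictors along the path, not as a standalone score. All feature names, thresholds and data here are invented.

```python
# Hypothetical sketch of an audit trail for a tree-based risk model: record
# the sequence of decisions that led to a forecast for one case.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
feature_names = ["prior_offences", "age", "postcode_area_code", "offence_severity"]

X = rng.normal(size=(500, 4))
y = rng.choice(["low", "medium", "high"], size=500)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

def audit_trail(case: np.ndarray) -> list[str]:
    """Return a human-readable record of the decisions made for one case."""
    node_ids = tree.decision_path(case.reshape(1, -1)).indices
    t = tree.tree_
    steps = []
    for node in node_ids:
        if t.children_left[node] == -1:  # leaf node: record the forecast
            steps.append(f"forecast: {tree.classes_[t.value[node].argmax()]}")
        else:                            # internal node: record the split taken
            name = feature_names[t.feature[node]]
            thr = t.threshold[node]
            op = "<=" if case[t.feature[node]] <= thr else ">"
            steps.append(f"{name} {op} {thr:.2f}")
    return steps

print(audit_trail(X[0]))
```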


There are known limitations to Hart, Ms Urwin said.

For example, it is currently based solely on offending data from Durham Constabulary and does not have access to information in the police national computer.
This means that if someone with a history of violent crime from outside Durham police's jurisdiction were to be arrested by the force, Hart would not be able to make an accurate prediction as to how dangerous they were.
"That's a problem," said Helen Ryan, head of law at the University of Winchester, though she added, "Even without this system, [access to sufficient data is] a problem for the police."

However, Dr Ryan said she thought Hart was "incredibly interesting" in principle and that it had the potential to be hugely beneficial following extensive piloting.
"I think it's actually a very positive development," she added. "I think, potentially, machines can be far more accurate - given the right data - than humans."

source: http://www.bbc.com/news/technology-39857645
