9/09/2018

Headline, September 10, 2018 / ''TECHNOLOGY BLUE ALGORITHMS''


EXACTLY WHAT DATA gets fed into the algorithms varies by company. Some use ''risk terrain modelling'' [RTM], which tries to quantify what makes some areas crime-prone.

One RTM algorithm uses five factors : prevalence of past burglaries, the residence of people arrested for past property crimes, proximity to main roads, geographic concentration of young men, and the location of apartment buildings and hotels.
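
To make that concrete, here is a minimal sketch, in Python, of how a risk-terrain score over map grid cells might be computed from those five factors. The grid cells, equal weights and numbers are hypothetical illustrations for readers, not any vendor's actual model.

```python
# Minimal sketch of a risk-terrain-modelling style score. The factor values,
# equal weights and example data are hypothetical, not a vendor's real model.
from dataclasses import dataclass

@dataclass
class GridCell:
    cell_id: str
    past_burglaries: float           # burglaries recorded in the cell, normalised 0-1
    offender_residences: float       # residences of past property-crime arrestees, normalised 0-1
    main_road_proximity: float       # 1.0 = adjacent to a main road, 0.0 = far away
    young_male_concentration: float  # share of resident young men, normalised 0-1
    apartments_and_hotels: float     # density of apartment buildings and hotels, normalised 0-1

def risk_score(cell: GridCell) -> float:
    """Combine the five RTM factors into a single score (equal weights assumed)."""
    factors = [
        cell.past_burglaries,
        cell.offender_residences,
        cell.main_road_proximity,
        cell.young_male_concentration,
        cell.apartments_and_hotels,
    ]
    return sum(factors) / len(factors)

# Rank cells so the highest-risk areas can be flagged for extra patrols.
cells = [
    GridCell("A1", 0.9, 0.7, 1.0, 0.6, 0.8),
    GridCell("B4", 0.2, 0.1, 0.3, 0.2, 0.1),
]
for cell in sorted(cells, key=risk_score, reverse=True):
    print(cell.cell_id, round(risk_score(cell), 2))
```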

Some include requests for police help, weather patterns and the proximity of bars or transport stations. PredPol uses reported, serious crimes such as murder, aggravated assault and various forms of theft, as well as the crime's date, time and location.

Most of these algorithms use machine learning, so they are designed to grow more accurate the more predictions they make and the more data they take in.
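
As an illustration of that feedback loop, the sketch below updates a simple classifier incrementally each time a new batch of incident reports arrives. The features, labels and data are made up; this is not PredPol's or any other vendor's model.

```python
# Illustrative only: an incremental learner that is updated as new incident
# reports arrive, which is the sense in which such systems "learn" over time.
# The features (hour, day of week, grid-cell index) and labels are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # 1 = an incident occurred in that cell/time slot

def update_with_new_reports(X_new: np.ndarray, y_new: np.ndarray) -> None:
    """Fold a fresh batch of reports into the model without retraining from scratch."""
    model.partial_fit(X_new, y_new, classes=classes)

# Each row: [hour of day, day of week, grid-cell index]
X_batch = np.array([[22, 5, 14], [9, 1, 3], [23, 6, 14]])
y_batch = np.array([1, 0, 1])
update_with_new_reports(X_batch, y_batch)
print(model.predict(np.array([[22, 6, 14]])))
```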

SOME ANALYTIC PROGRAMMES suck in and link up more data. A joint venture between Microsoft and the NYPD, called the Domain Awareness System, pulls data from the city's thousands of publicly owned CCTV cameras, hundreds of fixed and car-mounted ANPRs [automatic number-plate readers], and other data resources.

The NYPD says its system can track where a car associated with a suspect has been for months past, and can immediately alert police to any criminal history linked with a flagged number plate.
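
A toy sketch of the kind of lookup such a capability implies follows: a log of plate sightings plus a watch-list of flagged plates. The data structures and data here are hypothetical; the actual Domain Awareness System is proprietary.

```python
# Hypothetical sketch: a sightings log keyed by number plate, and a watch-list
# that triggers an alert whenever a flagged plate is seen by a camera or ANPR.
from collections import defaultdict
from datetime import datetime

sightings = defaultdict(list)   # plate -> [(timestamp, camera location), ...]
flagged_plates = {"ABC1234"}    # plates linked to a criminal history (made up)

def record_sighting(plate: str, when: datetime, where: str) -> None:
    sightings[plate].append((when, where))
    if plate in flagged_plates:
        print(f"ALERT: flagged plate {plate} seen at {where} ({when:%Y-%m-%d %H:%M})")

def movement_history(plate: str):
    """Return every recorded sighting of a plate, oldest first."""
    return sorted(sightings[plate])

record_sighting("ABC1234", datetime(2018, 9, 1, 14, 30), "camera 12")
record_sighting("ABC1234", datetime(2018, 9, 4, 9, 5), "camera 3")
print(movement_history("ABC1234"))
```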

YOU HAVE A RIGHT TO REMAIN SILENT : So do these algorithms work? Do they accurately forecast where crime will occur and who will go on to commit future crimes?

Here the evidence is ambiguous. PredPol touts its 21-month-long trials in Kent, an English county, and Los Angeles, which found that the programme predicted and helped to prevent some types of crime [such as burglary and car theft] more accurately than human analysts did.

A trial in Louisiana of a different data-driven predictive policing model, however, found no statistically significant reduction in property crimes compared with control districts.

But even if such approaches proved effective beyond a doubt, concerns over their potential to trample civil liberties and replicate racial bias would remain.

These concerns are most acute for algorithms that implicate people rather than places. The Chicago police department has compiled a ''strategic subject list'' of people it deems likely to be perpetrators or victims of gun violence [both groups tend to comprise young African-Americans from the city's south and west sides].

Its central insight parallels that of geographic predictions : a small number of people are responsible for a large share of violent crime. The department touts its accuracy. In the first half of 2016, it says, 74% of gun-violence victims and 80% of those arrested for gun violence were on the list.

Police say they update the list frequently. When someone new shows up on it, officers will sometimes visit that person's home, thus prompting contact with police before a person has committed a crime.

Nobody knows precisely how you end up on the list, nor is it clear how [short of being dead] you can get off it. One 22-year-old man, Robert McDaniel, told the Chicago Tribune that police came to his home and told him to straighten up even though he had just a single misdemeanour conviction [he may have been earmarked because a childhood friend with whom he was once arrested was shot dead].

In a study of the first version of the list from 2013, RAND, a think-tank, found that people on it were no more likely to be victims of a shooting than those in a random control group. Police say the list is far more accurate, but have still refused to reveal the algorithmic components behind it.

And both Chicago's murder rate and its total number of homicides are higher today than they were when police started using the list in 2013.

Meanwhile algorithms used in sentencing have faced criticism for racial bias. ProPublica, an investigative-journalism NGO, studied risk scores assigned to 7,000 people over two years in Broward County, Florida, and found black defendants twice as likely as whites to be falsely labelled as at high risk of committing future crimes.

It also found that the scores predicted violence poorly; only around 20% of those forecast to commit violent crimes actually did so. Northpointe, the firm behind the algorithm, disputed ProPublica's findings.
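
For readers who want the arithmetic spelled out, the sketch below computes a false-positive rate by group: among defendants who did not go on to reoffend, how often was each group labelled high risk? The records here are invented for illustration and are not ProPublica's dataset.

```python
# Simplified sketch of the comparison ProPublica made: among defendants who did
# NOT reoffend, how often was each group labelled high risk? Records are made up.
records = [
    # (group, labelled_high_risk, reoffended)
    ("black", True, False),
    ("black", False, False),
    ("black", True, True),
    ("white", False, False),
    ("white", True, False),
    ("white", False, True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in a group who were nonetheless labelled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    falsely_flagged = [r for r in non_reoffenders if r[1]]
    return len(falsely_flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
```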

 The Honor and Serving of the latest Technology Advances on Policing and Crime continues.

With respectful dedication to the Police Authorities and Departments the world over, and then the Students, Professors and Teachers of the world. See Ya all ''register'' on The World Students Society and Twitter - !E-WOW! - the Ecosystem 2011:

''' Students & Justice '''

Good Night and God Bless

SAM Daily Times - the Voice of the Voiceless
