A-levels: Algorithm at centre of grading crisis ‘unlawful’ says Labour – BBC News

Image: Student working at a computer (Getty Images)

The algorithm used to downgrade thousands of A-level results in England was “unlawful”, Labour have claimed.

The computer-based model used by Ofqual to standardise results after exams were cancelled breached anti-discrimination legislation as well as laws requiring it to uphold standards, Labour says.

The party wants Gavin Williamson to publish the legal advice he was given.

The education secretary has backed the regulator but apologised for the hurt caused to pupils by the chaos.

Labour are calling for A-level pupils in England to be given a “cast-iron guarantee” they will not lose out on their first choice university place next month or in the future.

Mr Williamson, who is facing calls from students and opposition MPs to resign, has urged universities to show flexibility after Monday’s results U-turn threw September’s admission process into further confusion.

Thousands of pupils remain uncertain about which university they will end up at after Ofqual said centre-assessed grades (CAGs), the grades submitted by schools and colleges, would be accepted following a furore over its standardisation process.

The regulator has been severely criticised for using an algorithm to “moderate” the grades submitted by schools, giving substantial weight to schools’ past performance as well as other factors.

This resulted in nearly 40% of grades being downgraded, in some cases by more than one grade, with high-achieving pupils from schools in deprived areas disproportionately affected.
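
To see why giving weight to a school's past results can pull down individual high achievers, here is a much-simplified, hypothetical sketch of "moderating" teacher-assessed grades toward a school's historical grade distribution. It is not Ofqual's published model; the grade quotas, rounding and function below are illustrative assumptions only.

```python
# Hypothetical, much-simplified illustration of grade "moderation" toward a
# school's historical distribution. This is NOT Ofqual's actual algorithm;
# it only sketches why weighting past school performance can pull down
# individual high achievers at historically lower-performing schools.

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def moderate(teacher_grades, school_historic_share):
    """Re-assign grades so the cohort matches the school's past distribution.

    teacher_grades        -- grades submitted by the school, one per pupil
    school_historic_share -- grade -> fraction of pupils who typically
                             achieved it at this school in previous years
    """
    n = len(teacher_grades)
    # Rank pupils by their teacher-assessed grade, best first.
    ranked = sorted(teacher_grades, key=GRADES.index)
    # How many pupils the historic distribution "allows" at each grade.
    quota = {g: round(school_historic_share.get(g, 0) * n) for g in GRADES}
    moderated, i = [], 0
    for _ in range(n):
        # Move down the grade ladder once a grade's quota is exhausted.
        while i < len(GRADES) - 1 and quota[GRADES[i]] <= 0:
            i += 1
        moderated.append(GRADES[i])
        quota[GRADES[i]] -= 1
    return ranked, moderated

# A strong pupil at a school that rarely awarded top grades gets pulled down:
teacher = ["A*", "A", "B", "B", "C", "C", "C", "D"]
historic = {"A": 0.125, "B": 0.25, "C": 0.375, "D": 0.25}
print(list(zip(*moderate(teacher, historic))))
```

In this toy run the pupil given an A* by their teachers comes out with an A, simply because the school rarely produced A* grades in previous years; that is the kind of effect critics argued was discriminatory.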

‘No proper assessment’

Labour said there had been “no proper assessment” of this year’s cohort of pupils because the process used by Ofqual did “not accurately reflect” their level of knowledge, skill and understanding.

As a result, their results could not be “properly compared” with those of previous years or other exam boards, meaning the regulator was in breach of its legal obligation to uphold assessment and qualification standards.

In a letter to Mr Williamson and Ofqual’s chief executive Sally Collier, Labour said the weight given to past results from individual schools had caused “a mass of discriminatory impacts”.

This, it said, was “bound to disadvantage a whole range of groups with protected characteristics, in breach of a range of anti-discrimination legislation”. It said Ofqual’s policy of not allowing any right of appeal “beyond errors of application in the system” was also unlawful.

The opposition are pressing Mr Williamson to make clear when he was first made aware of concerns about the algorithm and what legal advice he received before approving its use.

“Ofqual and the Secretary of State have been fully in the knowledge that the standardisation formula that was being used was unlawful,” the letter said.

“It is regrettable that only when threatened with legal action that the government finally conceded to do what Labour have been calling for; for grades to be allocated based on CAGs.”

‘Right wrongs’

The decision to allow students to use the grades estimated by their teachers – or stick to the grades provided by the algorithm if they were higher – followed similar decisions in Scotland, Northern Ireland and Wales.

Labour is seeking assurances students who received offers from universities at clearing will not now lose them.

Several institutions have said they will honour all offers made to students before and immediately after the original results were announced but many students have said their places have since been withdrawn.

Media caption: Gavin Williamson said it was “the right thing to act” after results came out

Labour said this was unfair and ministers needed to “right this wrong” immediately.

It said all pupils must have their final grades confirmed by the end of the week and no-one should lose out on their first choice place “because of government incompetence”.

It is calling on ministers to “bend over backwards” to support students, including by helping universities to fund additional places needed to meet the demand.

“This fiasco is far from over,” Shadow Communities Secretary Steve Reed told the BBC. “There are many, many students that are still uncertain about whether they can go to university or which university they can go to.

“Every student that hasn’t got their firm grades given to them needs to have them by the end of the week so they can start to make decisions about their future.”

Students are being urged to contact their universities as soon as possible to discuss the options.

The government has lifted its cap on the numbers each institution can admit but some universities are warning of potential financial ruin if students switch to other institutions in huge numbers.

Meanwhile, Durham University has promised a bursary and guarantee of accommodation for everyone who defers their place until 2021.

Can an Algorithm Predict the Pandemic’s Next Moves? – The New York Times

Researchers have developed a model that uses social-media and search data to forecast outbreaks of Covid-19 well before they occur.

Credit: Tony Luong for The New York Times

Benedict Carey

Judging when to tighten, or loosen, the local economy has become the world’s most consequential guessing game, and each policymaker has his or her own instincts and benchmarks. The point when hospitals reach 70 percent capacity is a red flag, for instance; so are upticks in coronavirus case counts and deaths.

But as the governors of states like Florida, California and Texas have learned in recent days, such benchmarks make for a poor alarm system. Once the coronavirus finds an opening in the population, it gains a two-week head start on health officials, circulating and multiplying swiftly before its re-emergence becomes apparent at hospitals, testing clinics and elsewhere.

Now, an international team of scientists has developed a model — or, at minimum, the template for a model — that could predict outbreaks about two weeks before they occur, in time to put effective containment measures in place.

In a paper posted on Thursday on arXiv.org, the team, led by Mauricio Santillana and Nicole Kogan of Harvard, presented an algorithm that registered danger 14 days or more before case counts begin to increase. The system uses real-time monitoring of Twitter, Google searches and mobility data from smartphones, among other data streams.

The algorithm, the researchers write, could function “as a thermostat, in a cooling or heating system, to guide intermittent activation or relaxation of public health interventions” — that is, a smoother, safer reopening.
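
A minimal sketch of that thermostat idea, purely illustrative, is below: restrictions toggle only when a forecast risk signal crosses clearly separated thresholds. The thresholds, the hysteresis and the `forecast_risk` input are assumptions for illustration, not values from the paper.

```python
# Illustrative "thermostat" for public health interventions: tighten when a
# forecast risk signal rises above one threshold, relax only when it falls
# below a lower one. Thresholds and inputs are hypothetical.

def thermostat(forecast_risk, restrictions_on, tighten_at=0.7, relax_at=0.4):
    """Return whether restrictions should be active this period."""
    if restrictions_on:
        # Keep restrictions until risk falls well below the trigger level.
        return forecast_risk > relax_at
    # Only activate once risk has clearly risen.
    return forecast_risk > tighten_at

state = False
for week, risk in enumerate([0.2, 0.5, 0.8, 0.6, 0.3, 0.2]):
    state = thermostat(risk, state)
    print(f"week {week}: risk={risk:.1f} -> restrictions {'ON' if state else 'off'}")
```

The gap between the two thresholds is what stops policy from flip-flopping on small fluctuations, which is the practical appeal of acting on a smoothed forecast rather than on raw case counts.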

“In most infectious-disease modeling, you project different scenarios based on assumptions made up front,” said Dr. Santillana, director of the Machine Intelligence Lab at Boston Children’s Hospital and an assistant professor of pediatrics and epidemiology at Harvard. “What we’re doing here is observing, without making assumptions. The difference is that our methods are responsive to immediate changes in behavior and we can incorporate those.”

Outside experts who were shown the new analysis, which has not yet been peer reviewed, said it demonstrated the increasing value of real-time data, like social media, in improving existing models.

The study shows “that alternative, next-gen data sources may provide early signals of rising Covid-19 prevalence,” said Lauren Ancel Meyers, a biologist and statistician at the University of Texas, Austin. “Particularly if confirmed case counts are lagged by delays in seeking treatment and obtaining test results.”

The use of real-time data analysis to gauge disease progression goes back at least to 2008, when engineers at Google began estimating doctor visits for the flu by tracking search trends for words like “feeling exhausted,” “joints aching,” “Tamiflu dosage” and many others.

The Google Flu Trends algorithm, as it is known, performed poorly. For instance, it continually overestimated doctor visits, later evaluations found, because of limitations of the data and the influence of outside factors such as media attention, which can drive up searches that are unrelated to actual illness.

Since then, researchers have made multiple adjustments to this approach, combining Google searches with other kinds of data. Teams at Carnegie Mellon University, University College London and the University of Texas, among others, have models incorporating some real-time data analysis.

“We know that no single data stream is useful in isolation,” said Madhav Marathe, a computer scientist at the University of Virginia. “The contribution of this new paper is that they have a good, wide variety of streams.”

In the new paper, the team analyzed real-time data from four sources, in addition to Google: Covid-related Twitter posts, geotagged for location; doctors’ searches on a physician platform called UpToDate; anonymous mobility data from smartphones; and readings from the Kinsa Smart Thermometer, which uploads to an app. It integrated those data streams with a sophisticated prediction model developed at Northeastern University, based on how people move and interact in communities.

The team tested the predictive value of trends in the data stream by looking at how each correlated with case counts and deaths over March and April, in each state.

In New York, for instance, a sharp uptrend in Covid-related Twitter posts began more than a week before case counts exploded in mid-March; relevant Google searches and Kinsa measures spiked several days beforehand.

The team combined all its data sources, in effect weighting each according to how strongly it was correlated with a coming increase in cases. This “harmonized” algorithm anticipated outbreaks by 21 days, on average, the researchers found.
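
A hedged sketch of what such a correlation-weighted combination might look like is below. The fixed 14-day lead, the z-score standardisation, the weighting scheme and the synthetic data are all illustrative assumptions; the paper's harmonisation method is more sophisticated than this.

```python
# Sketch: weight each real-time stream by how well it correlates with case
# counts a fixed number of days later, then combine the standardised streams
# into one early-warning signal. Illustrative only; not the paper's method.

import numpy as np

def harmonized_signal(streams, cases, lead=14):
    """streams: dict of name -> 1-D daily series; cases: 1-D daily case counts."""
    weights, zscores = {}, {}
    for name, x in streams.items():
        x = np.asarray(x, dtype=float)
        # Correlate today's stream value with case counts `lead` days later.
        r = np.corrcoef(x[:-lead], cases[lead:])[0, 1]
        weights[name] = max(r, 0.0)               # ignore negatively correlated streams
        zscores[name] = (x - x.mean()) / x.std()  # put streams on a common scale
    total = sum(weights.values()) or 1.0
    combined = sum(w * zscores[n] for n, w in weights.items()) / total
    return combined, weights

# Toy usage with synthetic data: one stream leads cases by 14 days, one is noise.
rng = np.random.default_rng(0)
t = np.arange(120, dtype=float)
cases = np.concatenate([np.full(60, 5.0), 5 + (t[60:] - 60) ** 1.5])
streams = {
    "twitter": np.roll(cases, -14) + rng.normal(0, 2, 120),  # leads cases (wraps at the end)
    "noise": rng.normal(0, 1, 120),
}
signal, w = harmonized_signal(streams, cases)
print({name: round(weight, 2) for name, weight in w.items()})
```

On this synthetic data the stream that genuinely leads case growth ends up carrying nearly all of the weight while the noise stream is discounted, which is the behaviour the weighting is intended to produce.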

Looking ahead, it predicts that Nebraska and New Hampshire are likely to see cases increase in the coming weeks if no further measures are taken, even though case counts there are currently flat.

“I think we can expect to see at least a week or more of advanced warning, conservatively, taking into account that the epidemic is continually changing,” Dr. Santillana said. His co-authors included scientists from the University of Maryland, Baltimore County; Stanford University; and the University of Salzburg, as well as Northeastern.

He added: “And we don’t see this data as replacing traditional surveillance but confirming it. It’s the kind of information that can enable decision makers to say, ‘Let’s not wait one more week, let’s act now.’”

For all its appeal, big-data analytics cannot anticipate sudden changes in mass behavior any better than other, traditional models can, experts said. There is no algorithm that might have predicted the nationwide protests in the wake of George Floyd’s killing, for instance — mass gatherings that may have seeded new outbreaks, despite precautions taken by protesters.

Social media and search engines can also become less sensitive over time; the more familiar people become with a pathogen, the less likely they are to search for the chosen keywords.

Public health agencies like the Centers for Disease Control and Prevention, which also consults real-time data from social media and other sources, have not made such algorithms central to their forecasts.

“This is extremely valuable data for us to have,” said Shweta Bansal, a biologist at Georgetown University. “But I wouldn’t want to go into the forecasting business on this; the harm that can be done is quite severe. We need to see such models verified and validated over time.”

Given the persistent and recurring challenges of the coronavirus and the inadequacy of the current public health infrastructure, that seems likely to happen, most experts said. There is an urgent need, and there is no lack of data.

“What we’ve looked at is what we think are the best available data streams,” Dr. Santillana said. “We’d be eager to see what Amazon could give us, or Netflix.”
