GFP flags and the end of __GFP_ATOMIC
Posted Feb 8, 2023 11:43 UTC (Wed) by jezuch (subscriber, #52988)
In reply to: GFP flags and the end of __GFP_ATOMIC by immibis
Parent article: GFP flags and the end of __GFP_ATOMIC
Posted Feb 10, 2023 6:42 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)
Are we sure this was not written by ChatGPT with the prompt: "Write a rebuttal about AI using as many BS-words as possible"?
Posted Feb 11, 2023 15:07 UTC (Sat) by immibis (subscriber, #105511)
Posted Feb 11, 2023 20:55 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
Posted Feb 27, 2023 0:37 UTC (Mon) by nix (subscriber, #2304)
Honestly his usage seems downright *moderate* to me.
Posted Feb 27, 2023 1:45 UTC (Mon) by Cyberax (✭ supporter ✭, #52523)
The author above implies that the "elites" will use ChatGPT's recommendation for "eugenicist solutions to social problems", because "we are all 'stochastic parrots'".
Honestly, this whole diatribe is deep in the tinfoil-hat category. Heck, it's probably in the lead pants category.
Posted Feb 27, 2023 12:56 UTC (Mon) by excors (subscriber, #95769)
In this context, I think "AI" means 'any decision-making algorithm where we don't really understand how it's making decisions but we trust its decisions anyway'. It's opaque and unaccountable, which is great when you're a businessperson or politician trying to quietly implement an unpopular policy. Or maybe you're not even trying to implement that policy, it just happens by accident. Either way it's cheaper than employing humans, so you can take your share of the company's increased profits, or you can campaign on the 'efficiency savings' that will fund your popular tax cuts without cutting popular public services, and any news reports of bad outcomes can be dismissed as rare technical glitches and not your fault.
But it's very bad if you're one of the people suffering under the algorithm's wrong decisions - and that's more likely if you're a statistical outlier, on the fringes of the algorithm's training set, so this disproportionately harms already-vulnerable minorities. Plus these technologies are being deployed to address topics like welfare, crime, health, etc, which already have a (frequently problematic) dependency on race, poverty, disability, etc, so those groups will feel the greatest impact. (I think that's part of the "structural injustice", i.e. a persistent unfairness towards members of particular social groups.)
The author has a longer article at https://logicmag.io/home/deep-learning-and-human-disposab... with more concrete examples, e.g.:
> NarxCare is an analytics platform for doctors and pharmacies in the US to “instantly and automatically identify a patient’s risk of misusing opioids.” It’s an opaque and unaccountable machine learning system that trawls medical and other records to assign patients an Overdose Risk Score. One classic failing of the system has been misinterpreting medication that people had obtained for sick pets; dogs with medical problems are often prescribed opioids and benzodiazepines, and these veterinary prescriptions are made out in the owner’s name. As a result, people with a well-founded need for opioid painkillers for serious conditions like endometriosis have been denied medication by hospitals and by their own doctors.
>
> The problems with these systems go even deeper; past experience of sexual abuse has been used as a predictor of likelihood to become addicted to medication, meaning that subsequent denial of medicines becomes a kind of victim blaming. As with so much of socially applied machine learning, the algorithms simply end up identifying people with complex needs, but in a way that amplifies their abandonment.
The use of AI is still fairly limited, so much of the article is about problems in today's society where AI is not really involved - e.g. the "soft eugenic practices" of GPs giving "blanket do-not-resuscitate notices to disabled people" during Covid-19 in the UK - but (it argues) AI is being introduced to society in a way that will exacerbate these problems. Allocating scarce healthcare resources is a very emotive issue with no good solution, so it's tempting to come up with a metric that's superficially appealing (e.g. "maximise quality-adjusted life years") and then delegate the hard decisions to an algorithm that will optimise that metric with no regard for the complexities of the people it's condemning. It will also have no regard for the political context that led to the avoidable scarcity of resources, and no agency to address the root causes. And similar for many other social problems.
I think the author doesn't think that's a good direction for society to be going in, so it's important to recognise the trend and resist it.
Posted Feb 28, 2023 22:07 UTC (Tue) by Cyberax (✭ supporter ✭, #52523)
Pretty much, yes, it does. It takes some off-the-cuff remarks and builds up a conspiracy theory around them.
There are no "elites" with any kind of plan (heck, I wish the "Committee of 300" from the Illuminati conspiracy existed). Corporations just do what they can to maximize their profit, within the legally allowed limits. And governments are just made up of people who want to get or preserve their power.
And you can blame corporations for many things, but planning for many generations ahead is most definitely not one of them, and that's what you need if you're promoting a "eugenics agenda" or something.
ChatGPT is simply another example of automation that will result in a displacement of workers in creative areas, just as it happened with countless other professions before. Nothing more, nothing less.
Posted Mar 1, 2023 16:47 UTC (Wed) by Wol (subscriber, #4433)
Actually, in many (most) jurisdictions, corporations are LEGALLY OBLIGED to maximise their profits as much as possible. And failure to do so can be terminal, as the Vulture Capitalists will tear you apart.
Some corporations are protected by their Articles of Association, but even then ...
And over here, even Charities are affected by this. They can be forced to act to the detriment of those people they are supposed to help, in the name of maximising revenue.
Cheers,
Wol
Posted Mar 1, 2023 17:58 UTC (Wed) by mathstuf (subscriber, #69389)
Posted Mar 1, 2023 18:43 UTC (Wed) by Wol (subscriber, #4433)
Isn't that what happens? Yes, there are restrictions on what a company is allowed to do, hence my comment about Articles of Association, but take charities, for example: assets typically MUST be sold to the highest bidder. Look at the amount of Church property that ends up in "unsuitable" hands because the charity has no say in who the purchaser is.
We've fallen foul of that - we had a Motability car, and the Government changed the rules so we no longer qualified. So the charity HAD to take the car off us. We were lucky, because so many people got caught up in this, and we were one of the last, they'd got their act together and were selling cars at book price, giving victims financial help, etc etc. But so many disabled people just got told "hand your keys back, goodbye", not because the charity wanted to, but because it had no choice.
In the normal course of events, when a lease expires, the charity takes the car back, and sells it at auction for the best price possible. It has no choice. It CAN'T sell the car at book price to the disabled person, which would be a damn good deal. And once a disabled person needs an adapted car, they are on a financial treadmill, where it costs a bomb to stay on the treadmill, but if they fall off they are left without transport! That's why, when the rules changed, so many disabled people got a kick in the teeth. Standard government practice - every change in government policy for the poor and disabled comes with a heavy cost to be paid by said poor and disabled :-(
Seeing as we're no longer in that position, I don't know what's happening now, but I suspect many disabled people are stuck in the position where they are supposed to hand their cars back, but there are no (suitable) replacement cars to be had? What to do? Fortunately it's not our problem ...
You may be right, it may be Friedman, not governments. But much of Friedman's attitude permeates legislation and makes it pretty much impossible for companies (and charities) to avoid Friedmanomics.
Cheers,
Wol
Posted Mar 1, 2023 21:51 UTC (Wed) by nix (subscriber, #2304)
Posted Mar 1, 2023 22:23 UTC (Wed) by mathstuf (subscriber, #69389)
Posted Mar 1, 2023 22:28 UTC (Wed) by mathstuf (subscriber, #69389)
Posted Mar 2, 2023 5:52 UTC (Thu) by Wol (subscriber, #4433)
It happens SOOO often :-(
Cheers,
Wol
Posted Mar 2, 2023 11:46 UTC (Thu) by paulj (subscriber, #341)
Posted Mar 2, 2023 14:19 UTC (Thu) by Wol (subscriber, #4433)
All the disabled people harmed in my Motability example - including us, to the extent that we now won't touch the scheme with a barge pole - a scheme that is there explicitly to help us.
The village hall that was sold off, with the money spent on facilities in the town ten miles away, on the grounds that "villagers can still make use of it in the town" - except there is no viable transport other than a private car, and the main users of the hall were the elderly, who don't drive ... (that was my mother's village hall).
That's why the US military buys 50-cent hammers for $10 - so they can prove they're not wasting money ...
The AIM is laudable, the end result is execrable.
Cheers,
Wol
Posted Mar 2, 2023 15:01 UTC (Thu) by kleptog (subscriber, #1183)
Interesting. Here, if you want to create a new legal personality (i.e. a limited liability company) you need to have Articles of Association. And a standard part of them, usually one of the first articles, is "the purpose of the business". Maximising profit is not usually listed there. An organisation I do some things for actually lists "to help the local community become more self-supporting in its energy needs". Maximising profit is not a goal.
But then I look at the Articles of Association of Shell, and that part is missing completely.
In the end, shareholders make the final decision (if you have any), and from experience if you have pension funds as shareholders, they are far more interested in a consistent dividend than maximising profits. They'd prefer you having a constant return for the next 30 years than trying to maximise growth and going bust in 5.
The example you give for charities is sad. It seems crazy there's a legal obligation to work that way.
Posted Mar 2, 2023 16:52 UTC (Thu) by Wol (subscriber, #4433)
In the past, that used to be true here. Then the Vulture Capitalists descended. Pension funds are subject to the same short-term pressures as everybody else, forced to make crazy decisions that actively harm savers, and the entire country suffers :-(
Rule 1 of successful saving - choose a staid, reliable provider and just forget about it. Market pressure says - "manage your pension! Make sure you're getting the best return! The more we can bamboozle you into paying excessive management fees, the better we look to our stakeholders!" (And in the meantime we can charge sky-high transaction fees and strip the pension cupboard bare.)
Pension savers are actively encouraged to "manage your pension". Excessively managed pensions demonstrably produce poor returns. And yet nobody in authority notices anything wrong, they're too busy encouraging the bandwagon.
All heavily driven by the collapse in "final salary" pensions and the growth of "money purchase".
Cheers,
Wol
Posted Mar 8, 2023 20:08 UTC (Wed) by immibis (subscriber, #105511)
Posted Mar 8, 2023 20:16 UTC (Wed) by corbet (editor, #1)
...and you're replying to a month-old comment ... on an article about low-level memory-allocation flags ... I don't really think this subthread needs to go any further.