Carleas wrote:Karpel Tunnel wrote:It seems to me the people who want the active case are using the people performing the passive case. They are collaborating.
Sure, but the Active Case is often justified, and where it is we want it to use the best available information from the Passive Case.
If the government is good. One idea in the US is that the laws need to reflect the ever-present possibility that one day it will not be - that this has tended to happen - and so we hamper even good governments, so that should a bad one come in (not a poor president and a majority in the House, say, but an actual shift toward fascism), that government will be hindered. Private agents are also hindered. I don't think the use of private information, as it is currently practiced by Google, fb, etc., is legal.
And either way, the existence of the Active Case, even where it piggybacks on the Passive Case, does not turn all passive data collection into active surveillance.
It gives government and private companies way too much power. And this power is known, and that knowledge has effects. And it is abused with regularity: anything from personal vendettas by staff in private and public positions, to making data available to third parties without the consent of individuals and with no promise of security on those sharings, to failing to defend well enough against hacking, to intentionally providing third parties with information that may, or even will, be used in ways that are unfair or even dangerous to the private persons the data relates to.
Sure, but by the same coin it need only be used well once, if the scale is big enough, to be an absolute bonanza. We need to either compare actual positive use to actual negative use, or hypothetical best case scenario to hypothetical worst case scenario, or some probability-weighted version of both, to evaluate the net impact and expected effect.
We have a history of keeping government from having easy ways to shift to controlling the population and leaving democracy behind. We tend to risk quite a bit to keep this from happening.
The effects I described are from the very proposals that savvy privacy experts have put forward. I don't mean that these are the intended consequences, but rather they are how the regulations function in the real world. Right To Be Forgotten has been abused by the powerful to cover up crimes and embarrassments; GDPR has burdened small companies to the benefit of the big players that were meant to be brought to heel. Savvy privacy people are very good at coming up with regulations, but they tend to ignore unintended consequences, trade-offs, opportunity costs, and the economics of what they advocate.
It is very hard to control outcomes, which is part of why I am wary of giving governments and enormous corporations this much power. Precisely the argument you are making here adds to the caution we should feel about the attitude that, well, it'll work out, surveillance will generally be used where it should be. That said, I think there are people making better proposals than you are noticing, and the reason they are not being put in place is that they would not just benefit the rich.
It may not be impossible to balance these things out, but it is very, very difficult to make a law that prohibits people from talking about other people and doesn't open itself to abuse.
You could demand, at a minimum, that the company, for example, must inform us about what kinds of information they are sharing with third parties and who those third parties are. I think if you sell someone an online vacuum cleaner, and that vacuum is going to map your apartment, record things, and give a customer profile of you to third parties, you should know who those parties are, and then be able to see what the quality of their security around your data is. A common claim these companies make is that the data is anonymous, but studies have shown that in fact hackers, the companies themselves, and governments can figure out who the data belongs to with accuracy up in the 90% range. And these third parties have no obligation right now to take adequate security measures around that data. We've had TVs that actually record people in their apartments. Forty years ago this kind of thing would have been assumed to warrant at least a warning label on the product. Now they don't even need to inform us, let alone ensure that what they are selling will be guarded appropriately.
With the internet of things, which in many ways is already here, we don't even have to be customers of a company to be swept up in monitoring - enough monitoring to be a danger to battered wives, dissidents, and people doing things their employers might dislike but which are legal, and enough for a rogue government to do all sorts of things with. The US government has done such things around political dissidents, minority groups, civil rights advocates and more with regularity, but never with tools like these. Now they have AIs culling the data; now you can be followed without a court order all day long.
Karpel Tunnel wrote:We should be breaking the private giants up, regulating them, taking away their corporate charters when they break laws repeatedly, which some or perhaps all of the giants have done.
These seem like separate questions. We can agree that we should break up companies whose market power distorts the economy in harmful ways, regulate them to forbid selfish and antisocial behavior, and back those regulations up with whatever consequences are most effective (and have the least collateral damage). But that does not tell us about the size at which companies become harmful nor which behaviors need to be regulated (nor, for that matter, whether revoking a corporate charter is a better consequence all told than e.g. criminally prosecuting the human beings making the ostensibly criminal decisions).
Of course, but since a corporate charter is a privilege, I think any company that repeatedly evades legal regulation or commits crimes should lose its charter. What we have now is companies like fb and google that wield unbelievable power and ignore governments repeatedly. They consider themselves above the law, and even when agreeing to comply, find ways to get around it. Again, I recommend Surveillance Capitalism. I don't think the public, even the well-educated public, has any idea what these companies do and do not do, and what is already happening. There is not the slightest informed consent about these issues. Very few people realize how much information is being shared about them, to whom, how little obligation the companies have to protect that information, how it has already been abused, and how often these companies have broken laws and ignored legislation and oversight - and given their enormous power, those are only the attempts at oversight that got past the antidemocratic lobbying practices and campaign-finance influence these behemoths use to nip enforcement in the bud.
At the very least we should know what is happening, where the information goes and what obligations the parties have in relation to it.
But the vast majority of people do not know, in the least.
And they would care.