How is the drone going to determine that it is being used to commit a crime?
Calling a person of unspecified gender anything other than “they” was until recently considered incorrect. “They” is plural but is now used to refer to singular persons because writing “he or she” everywhere is too cumbersome. Calling a user “he” does not imply that users are male or can only be male. Not using “they” or “he/she” or obscure gender-neutral pronouns does not make something inherently transphobic. Closing PRs that unnecessarily change pronouns as spam is not inherently transphobic either, but the accompanying comment is not very inclusive.
The post talks about “white supremacist language,” but the proposed change did not remove white supremacist language. It was just a generic anti-“woke” message, possibly motivated by people brigading after the original PR to change “he” to “they.” White supremacists may also use similar language, but you can’t just pick things that a white supremacist has done and decide that anyone else who does the same is a white supremacist. He’s not blameless, but people are intentionally provoking the developer and exaggerating the responses for drama.
Built bundles are not affected. The service is supposed to figure out which polyfills a particular browser needs and serve different scripts accordingly. Because it serves different scripts to different browsers, the scripts cannot be bundled or pinned with SRI; that would defeat the purpose of the service.
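To illustrate, SRI pins a script tag to a hash of one exact file, which is fundamentally incompatible with a service that sends different bytes to different browsers (the URL and hash below are placeholders, not the real service):

```html
<!-- SRI: the browser refuses to run the script unless its bytes hash
     to the declared value. A per-browser polyfill service would fail
     this check for every browser except the one it was hashed for. -->
<script src="https://cdn.example.com/polyfill.min.js"
        integrity="sha384-PLACEHOLDER_HASH_OF_ONE_SPECIFIC_FILE"
        crossorigin="anonymous"></script>
```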
Code pulled from GitHub or NPM can be audited, and it behaves consistently after it has been copied. If the code has a high reputation and gets incorporated into bundles, the code in those bundles doesn’t change. If the project later turns malicious, only recently created bundles are affected. This code, by contrast, is pulled from polyfill.io every time somebody visits the page, and polyfill.io was recently hijacked to sometimes send malicious code instead. Websites that have been up for years can be affected by this.
Docker Swarm encryption doesn’t work for your use case. The documentation says the secret is stored encrypted but can be decrypted by swarm manager nodes and by nodes running services that use the secret, both of which describe your single node. If you don’t have to unlock the swarm on startup, the encrypted values and the decryption key live next to each other on the same computer, and anyone who has access to the encrypted secrets can also decrypt them.
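A quick way to see this on a single-node swarm (the secret name and value are made up for the example):

```shell
# Create a secret; the manager stores it encrypted in the Raft log.
printf 'hunter2' | docker secret create db_password -

# Any service on this node that uses the secret gets it back in plaintext:
docker service create --name demo --secret db_password alpine \
    sh -c 'cat /run/secrets/db_password; sleep 1d'

# Without autolock, the key that decrypts the Raft log sits on the same
# disk. Autolock moves it out, at the cost of a manual step per restart:
docker swarm update --autolock=true   # restarts now require 'docker swarm unlock'
```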
China is simultaneously destroying the environment for profit and investing too much money in green technology?
A distinctive feature of purchase subsidies for BEV in China, however, is that they are paid out directly to manufacturers rather than consumers and that they are paid only for electric vehicles produced in China, thereby discriminating against imported cars.
That’s an interesting way to spin subsidies on the production of electric vehicles. Why would China pay companies in other countries to produce cars?
I looked it up before posting. It’s illegal in 48 states, including California, where most of these companies are headquartered, and every state where major cloud data centers are located. This makes it effectively illegal under state law, which is the worst kind of illegal in the United States when operating a service nationally, because every state’s law will differ slightly. No company is going to build a system that lets users in the two remaining states exchange revenge porn with each other, except maybe a website established solely for that purpose. Certainly not Snapchat.
I’ve noticed recently there are many reactionary laws to make illegal specific things that are already illegal or should already be illegal because of a more general law. We’d be much better off with a federal standardization of revenge porn laws than a federal law that specifically outlaws essentially the same thing but only when a specific technology is involved.
Web services and AI in general are completely different things. Web services that generate AI content want to avoid scandals, so they’re constantly blocking things that may be inappropriate in some situations, to the point where those services are incapable of performing a great many legitimate tasks.
Somebody running their own image generator on their own computer using the same technology is limited only by their own morals. They can train the generator on content that public services would not, and they are not constrained by prompt or output filters.
Modern AI is not capable of this. The accuracy of NSFW detection is not good, and detectors are completely incapable of deciding when NSFW content is allowable, because they have no morals and understand nothing about people or situations beyond appearance.
“This kid who is not getting any kind of real consequence other than a little bit of probation, and then when he’s 18, his record will be expunged, and he’ll go on with life, and no one will ever really know what happened,” McAdams told CNN.
“If [this law] had been in place at that point, those pictures would have been taken down within 48 hours, and he could be looking at three years in jail…so he would get a punishment for what he actually did,” McAdams told CNN.
There’s a reason kids are tried as kids and their records are expunged when they become adults. Undoing that will just ruin lives without lessening occurrences.
“It’s still so scary as these images are off Snapchat, but that does not mean that they are not on students’ phones, and every day I’ve had to live with the fear of these photos getting brought up resurfacing,” Berry said. “By this bill getting passed, I will no longer have to live in fear knowing that whoever does bring these images up will be punished.”
This week, Republican Senator Ted Cruz, Democratic Senator Amy Klobuchar and several colleagues co-sponsored a bill that would require social media companies to take down deep-fake pornography within two days of getting a report.
“[The bill] puts a legal obligation on the big tech companies to take it down, to remove the images when the victim or the victim’s family asks for it,” Cruz said. “Elliston’s Mom went to Snapchat over and over and over again, and Snapchat just said, ‘Go jump in a lake.’ They just ignored them for eight months.”
BS
It’s been possible for decades for people to share embarrassing pictures of you, real or fake, on the internet. Deep fake technology is only really necessary for video.
Real or fake pornography including unwilling participants (revenge porn) is already illegal and already taken down, and because the girl is underage it’s extra illegal.
Besides the legal aspect, the content described in the article, which may be an exaggeration of the actual content, is clearly in violation of Snapchat’s rules and would have been taken down:
- We prohibit any activity that involves sexual exploitation or abuse of a minor, including sharing child sexual exploitation or abuse imagery, grooming, or sexual extortion (sextortion), or the sexualization of children. We report all identified instances of child sexual exploitation to authorities, including attempts to engage in such conduct. Never post, save, send, forward, distribute, or ask for nude or sexually explicit content involving anyone under the age of 18 (this includes sending or saving such images of yourself).
- We prohibit promoting, distributing, or sharing pornographic content, as well as commercial activities that relate to pornography or sexual interactions (whether online or offline).
- We prohibit bullying or harassment of any kind. This extends to all forms of sexual harassment, including sending unwanted sexually explicit, suggestive, or nude images to other users. If someone blocks you, you may not contact them from another Snapchat account.
Formerly-in-business website formerly known as Twitter.
The headline says “digital freelancers,” so maybe it’s talking primarily about small jobs that were being outsourced. A 21% decrease in regular job listings would be more concerning, less for the job loss than for the amount of incorrect information and buggy software about to be created.
Aqara sells one that works with HomeKit and should work offline. They say it will get Matter support later, but Home Assistant can use it through HomeKit without having to buy any Apple devices.
The Index by itself is 500 dollars, not 1k.
The LCD screen was a feature of the Index over the OLED screen in the Vive. On the Vive, the OLED panel has a visible pattern and some of the image is lost because there aren’t equal numbers of red, green, and blue subpixels (similar to PSVR2). The Beyond’s screen is micro-OLED with a more regular subpixel pattern.
PSVR might be the only headset available with these features for cheaper, but not much cheaper, and it doesn’t have the headphones.
ChromeOS and ChromiumOS are Linux.
The problem with ChromeOS (and Android) devices is that hardware support usually exists only in a fork of Linux, which gets as little maintenance as possible for the five-year support window. You end up with the choice of running an old kernel that supports the hardware but not some new software, running a new kernel that supports new software but where the hardware doesn’t work right, or taking over maintenance of the fork yourself. The same problem occurs with uncommon hardware on non-ChromeOS devices.
Be careful with doing this. X-Real-IP and X-Forwarded-For are fine when the client is a trusted proxy, but they can be easily faked if you don’t whitelist who’s allowed to use those headers. Somebody with IPv6 access could send “X-Real-IP: 127.0.0.1” or something, and if the server believes it, you’ll see 127.0.0.1 in logs, and depending on what you’re running, the user may gain special permissions.
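A minimal sketch of the safe pattern: only honor the forwarded header when the connection actually comes from a proxy you control. `TRUSTED_PROXIES` and the addresses are assumptions for the example; fill in your VPS’s real address(es).

```python
import ipaddress

# Proxies we control -- an assumption for this sketch, replace with
# your proxy's real address(es).
TRUSTED_PROXIES = {ipaddress.ip_address("10.0.0.2")}

def client_ip(peer_addr: str, headers: dict) -> str:
    """Return the real client address, honoring X-Forwarded-For only
    when the connection itself comes from a trusted proxy."""
    peer = ipaddress.ip_address(peer_addr)
    forwarded = headers.get("X-Forwarded-For")
    if forwarded and peer in TRUSTED_PROXIES:
        # The last entry is the one our proxy appended; anything
        # earlier is client-supplied and can be faked.
        return forwarded.split(",")[-1].strip()
    # Untrusted peer: ignore the header, use the socket address.
    return peer_addr
```

With this, the spoofed header in the comment above is simply ignored: a direct IPv6 client claiming to be 127.0.0.1 still shows up under its own address.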
Also be careful with the opposite problem. If your server doesn’t trust the proxy, it will show the VPS IP in logs, and if you’re running something like fail2ban you’ll end up blocking your VPS and then nobody will be able to connect over IPv4.
The five year policy is for ChromeOS, not ChromiumOS. ChromiumOS-based devices may have more or less support.
If all you want is to break out the VLANs to NICs using a Linux PC instead of a managed switch, create six bridge interfaces and add to each bridge the corresponding VLAN interface and NIC.
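A sketch with the `ip` tool, assuming hypothetical interface names (eth0 is the trunk carrying the VLANs, eth1 is the first breakout NIC); repeat the pattern for each of the six VLAN/NIC pairs:

```shell
ip link add link eth0 name eth0.10 type vlan id 10   # VLAN 10 sub-interface on the trunk
ip link add br10 type bridge                         # bridge for VLAN 10
ip link set eth0.10 master br10                      # attach the VLAN side
ip link set eth1 master br10                         # attach the breakout NIC
ip link set eth0.10 up
ip link set eth1 up
ip link set br10 up
```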
There’s a lot of wrong advice about this subject on this post. Forgejo, like any other Git forge server, has a completely different security model than regular SSH. All authenticated users run as the same system user and are restricted to Git commands. It uses the secure shell protocol, but it is not a shell, and the threat model is different. Anybody can sign up for a GitHub or Codeberg account and be granted SSH access, but that access only allows them to push and pull Git data according to their account permissions.
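You can see this in the server’s authorized_keys file: forges register every user key behind a forced command, so the connection can only ever reach the forge binary, never a shell. The path, key ID, and key data below are illustrative:

```shell
# ~git/.ssh/authorized_keys entry written by the forge (roughly):
command="/usr/local/bin/forgejo serv key-1",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA<user-public-key>
```

Even if a user connects with `ssh git@host` interactively, sshd runs the forced command instead of a shell, and the forge binary decides what that key is allowed to do.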
The link is broken, but this is apparently an issue with Signal Desktop, not regular Signal. The proposed solution does not work on Windows: https://www.electronjs.org/docs/latest/api/safe-storage
It’s unfortunately about the best you can do on Windows.