  • Macy's and Sunglass Hut: a man was wrongly arrested after a facial-recognition misidentification.

    While in jail, he was raped, with a shank pressed against his neck, when he went to the bathroom.

     
  • Sports Illustrated called out for articles by fake writers with AI profile photos - YouTube
    Part of the reason we want people, not AI, in positions of responsibility like journalism is that doing the work is how a person improves and becomes more proficient. Ultimate decisions (such as whether this thing will be OK or harmful to us) are made by (and influenced, informed, and taught by) wise people. If people do not become professionals, there will be no experienced, wise people in these roles.

    #AI

    Judge Who Signed Newspaper Warrant Won't Be Disciplined - YouTube
    ‘The evidence did not support the search warrant.’ Instead of disciplining the magistrate, they just said, in effect, ‘next time, slow down.’ A First Amendment issue for journalists. Police raided the newspaper office, the journalists' home, and the home of a councilwoman, alleging ‘identity theft.’ In the raid on the news office, police took computers and cell phones, and took documents that revealed confidential sources for stories unrelated to the investigation. On ‘police weaponizing search warrants against journalists’: the proposal is to require police to get a subpoena instead of a search warrant for journalists, so that the courts oversee what happens when police do something involving journalists. That way, journalists are brought into court and told to answer questions, rather than having the door kicked down and everything taken, which compromises journalism and also looks like an attempt to shut down the paper/journalist.

    The town tried to shut down the newspaper. Other newspapers nearby stepped up and said ‘we’ll help you get your editions out.'

    Those reviewing whether the magistrate should be disciplined decided no, because he hadn't been found ‘incompetent.’ That was their standard: ‘just don’t cross the line into incompetence and you're OK.'

    Commenter: 'If signing the warrant wasn't incompetent we can only assume it was malicious.' Another: ‘Sufficiently advanced incompetence is indistinguishable from malice.’ Another: 'Wait signing a warrant that is ILLEGAL and violates federal law was not incompetence? ok then it was criminal because they knew what they were signing.'

  • AI companies that will be valuable will be those that have a valuable dataset (the AI itself less so).

    Bigger, existing companies that already understand their domain well will be advantaged compared with smaller startups. The enterprise will eat the startups' lunch.

    Companies didn't appraise these datasets as highly before this tool came along and showed how well it could use such big datasets.

    People are now locking down their datasets (Google Analytics?). Before, they would make them public and allow Google Search to use them, because that meant traffic for them.

    So far, even according to the Databricks CEO, chatbots seem to be the #1 use. Also analyzing customer data (medical records, ‘anonymously’) to find patterns. In insurance, there are long piles of papers to sign, and you can now ask how they apply to a particular case (a hedged document-Q&A sketch follows below). Also finding sentiment about a product.
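
    As an illustration of that insurance use case, here is a minimal sketch of asking how a policy document applies to a particular case. It assumes the OpenAI Python client (openai>=1.0) with an OPENAI_API_KEY in the environment; the helper name, model choice, and sample policy text are hypothetical, not anything from the source.

    ```python
    # Minimal sketch: ask how a long policy document applies to a particular case.
    # Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
    # The helper name, model choice, and sample text are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_policy_question(policy_text: str, question: str) -> str:
        """Send the policy text plus a case-specific question to the model."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice; any chat model works
            messages=[
                {"role": "system",
                 "content": "Answer using only the policy text provided."},
                {"role": "user",
                 "content": f"Policy:\n{policy_text}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        policy = "Water damage is covered only if the leak is reported within 14 days."
        print(ask_policy_question(policy, "A leak was found after three weeks; is it covered?"))
    ```

    The same pattern covers the product-sentiment use: swap the policy text for customer reviews and ask for the overall sentiment.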


  • DALL-E 2 search frequency


     

  • Attention and brain waves.
  • A Google engineer stated that LaMDA (a chatbot) was sentient

    It was in headlines all over. Google I think put him on suspension.

    From the chat, it didn't seem to me that LaMDA WAS sentient. It went along with the engineer's prompts, said it was sentient, and elaborated. Mental Outlaw pointed out that the engineer didn't challenge the chatbot on the question, and supposed that if the engineer had said, "You're not sentient, you're an AI," the chatbot would have gone along with that and agreed (not that a chatbot would necessarily even know whether it WAS sentient, and not that a chatbot can't be sentient just because it's a bot).

    The question, though: if an AI is sentient, must we then treat it differently, i.e., as we would any sentient being? And how can we know whether an AI IS sentient?

     
  • A Microsoft Research guy commented that if there were a breakthrough in privacy-preserving tech, there would be more use of AI

    Applications of AI to things like the huge datasets of medical records are bottlenecked by privacy issues.

    For lots of old research, it has since been found that, although no one knew it at the time, current tech can tell that a person in one study group was the same person as in a different study group (a hedged record-linkage sketch follows below).

    No one knew this at the time either, but eye scans can now be used to predict various things with some accuracy:

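    A minimal sketch of how that kind of cross-study re-identification can work: if two studies each release "anonymized" records, overlapping quasi-identifiers (birth year, ZIP prefix, sex, rough height) can link the same person across both. All field names and records below are made up for illustration.

    ```python
    # Minimal sketch: linking the same person across two "anonymized" study
    # datasets by matching quasi-identifiers. All fields and records are made up.

    STUDY_A = [
        {"id": "A-017", "birth_year": 1961, "zip3": "852", "sex": "F", "height_cm": 168},
        {"id": "A-031", "birth_year": 1974, "zip3": "100", "sex": "M", "height_cm": 180},
    ]

    STUDY_B = [
        {"id": "B-204", "birth_year": 1974, "zip3": "100", "sex": "M", "height_cm": 181},
        {"id": "B-377", "birth_year": 1990, "zip3": "606", "sex": "F", "height_cm": 160},
    ]

    def quasi_key(record: dict) -> tuple:
        """Combine quasi-identifiers into a linkage key (height rounded to 5 cm)."""
        return (record["birth_year"], record["zip3"], record["sex"],
                round(record["height_cm"] / 5) * 5)

    def link(study_a: list, study_b: list) -> list:
        """Return pairs of record IDs that share the same quasi-identifier key."""
        index = {quasi_key(r): r["id"] for r in study_a}
        return [(index[quasi_key(r)], r["id"]) for r in study_b if quasi_key(r) in index]

    print(link(STUDY_A, STUDY_B))  # [('A-031', 'B-204')] -- same person in both studies
    ```

    None of these fields is a name or ID number, yet together they are often unique enough to re-identify someone, which is why those old datasets turned out to be less anonymous than anyone thought.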

  • DALL-E's VW Beetle variations all come out looking similar to the real Beetle

    "Is there something about the design of the Beetle that even variations look similar?" asked Bakz T. Future.

     
  • DALL-E


    A side effect of all the DALL-E 2 posts is that you can see who on Twitter works at @OpenAI

     
  • Pentagon's first chief software officer resigned last month saying China will dominate the US in AI and bioengineering tech

    Nicolas Chaillan, age 37. He said he thought it was already a done deal and that the US would have no competitive chance in 15-20 years.

    He said many government departments in the US were run by people who weren't really experts in their fields. He also criticized Google-like tech giants for not wanting to cooperate with the US government over ethics issues.

    The US Secretary of Defense wants a $1.5B investment to develop AI faster.



  • AI is the second-biggest threat to civilization, said Elon Musk, arguably the world's biggest robot maker

    We should have a regulatory agency to oversee AI safety, he said, but there isn't anything like that right now and that type of thing takes governments years to do.

    He said he didn't really know what to do about it.

    (His biggest threat was population collapse.)

     
  • Daniel Hale awarded the Sam Adams Award for drone info

    Of the 200 people killed in a one-year period in 2012-2013 by US special forces airstrikes (using drones), only 35 were the intended targets.

    The innocent civilians were routinely categorized as 'enemies killed in action.'

    Hale was a defense contractor in 2013 when his conscience caused him to release classified documents to the press. He was charged under the Espionage Act and sentenced to 45 months.

    In a hand-written letter to Judge Liam O’Grady Hale explained that the drone attacks and the war in Afghanistan had “little to do with preventing terror from coming into the United States and a lot more to do with protecting the profits of weapons manufacturers and so-called defense contractors.”

    Hale also cited a 1995 statement by former U.S. Navy Admiral Gene LaRocque: “We now kill people without ever seeing them. Now you push a button thousands of miles away … since it’s all done by remote control, there’s no remorse … and then we come home in triumph.”

    Sam Adams Associates for Integrity in Intelligence  
  • EU border wall, sound weapons, AI lie detection

    In order to keep out migrants, several EU countries are building border walls (never mind their negative response to the 2017 Trump wall proposal), deploying sound cannons, and working on an AI lie-detection tool.

    Analysts have commented that tools like these, introduced for causes such as keeping out migrants, are often tested there first before being turned on the citizens of the countries that built them.

    They also note that the steps will possibly result in more deaths, as the migrants will turn to smugglers and other more dangerous methods of entering Europe.

     
