Tech Friday

posted by Brian Thomas

  • AI is making it harder to tell fact from fiction:
    • Advances in computing power and machine learning have led to technology that makes it hard to believe your own eyes and ears
    • New algorithms can take a single photo of someone and create a video that is completely fabricated and very, very realistic
    • Pinscreen is a Los Angeles start-up that has created the technology
    • They believe these renderings will become so realistic that it will be virtually impossible to determine what is real
    • Hao Li, a leading researcher on computer-generated video at USC, founded Pinscreen in 2015. "With further deep-learning advancements, especially on mobile devices, we'll be able to produce completely photoreal avatars in real time"
    • Videos known as "Deep Fakes" have surfaced where celebrities' faces have been carefully inserted into pornographic videos and popular movies 
    • FakeApp is one of several new AI-powered synthesizing tools that don't require specialized hardware or skilled experts to create convincing fake videos
    • Software such as FakeApp can be used for fraud, forgery, and propaganda. FakeApp has been downloaded more than 100,000 times and been used to create many fake pornographic videos featuring celebrities and politicians 
    • FakeApp is relatively easy to use: a user "trains" it with hundreds of photos of the source and target faces, and it relies on deep-learning algorithms to find patterns and similarities between the two (see the autoencoder sketch after this list)
    • While the process isn't trivial, you don't have to be a graphics or machine-learning expert to use FakeApp, and it will run on relatively low-end systems
    • Nvidia has published a video showing AI algorithms generating photo-quality synthetic human faces. It may soon be capable of creating realistic-looking videos of non-existent "people" 
    • "Ten years ago, if you wanted to fake something, you could, but you had to go to a VFX studio or people who could do computer graphics and possibly spend millions of dollars," says Dr. Tom Haines, lecturer in machine learning at University of Bath. "However, you couldn't keep it a secret, because you'd have to involve many people in the process."
    • University of Washington researchers recently demonstrated a similar technique to move President Obama's mouth to match a fake script
    • There are many possible applications for this technology and many of them are malicious
    • Imagine the capability to use fake videos for blackmail, revenge or propaganda
    • This technology could have a devastating impact on the use of audio and video evidence in court cases
    • "This goes far beyond 'fake news' because you are dealing with a medium, video, that we traditionally put a tremendous amount of weight on and trust in," said David Ryan Polgar, a writer and self-described tech ethicist
    • Hany Farid, a digital forensics expert at Dartmouth College, said watching for blood flow in the face can sometimes determine whether footage is real (see the pulse-check sketch after this list). He also said slight imperfections at the pixel level may reveal fakes
    • Farid said that over time he expects artificial intelligence to be able to overcome these issues, making it very difficult to spot a fake
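For readers wondering how a tool like FakeApp can work without expert users: the publicly described approach behind these face-swap tools is an autoencoder with one shared encoder and a separate decoder per identity. The PyTorch sketch below is a minimal illustration of that architecture, with assumed layer sizes and 64x64 inputs; it is not FakeApp's actual code.

```python
import torch.nn as nn

# Minimal face-swap autoencoder sketch (an illustration, NOT FakeApp's code).
# A single encoder is shared by both identities; each identity gets its own
# decoder. Training each (shared encoder, own decoder) pair to reconstruct
# its own face pushes pose and expression into the shared latent space, so
# decoding face A's latent with decoder B renders B's identity wearing A's
# expression -- the "swap".

LATENT = 128  # assumed latent size

def down(cin, cout):   # halve spatial resolution
    return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1), nn.ReLU())

def up(cin, cout):     # double spatial resolution
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1), nn.ReLU())

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            down(3, 32),            # 64x64 -> 32x32
            down(32, 64),           # -> 16x16
            down(64, 128),          # -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, LATENT),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT, 128 * 8 * 8)
        self.net = nn.Sequential(
            up(128, 64),            # 8x8 -> 16x16
            up(64, 32),             # -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),  # -> 64x64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity
reconstruction_loss = nn.L1Loss()

def training_loss(faces_a, faces_b):
    # Each identity is only ever trained to reconstruct itself;
    # the swap is never trained directly.
    loss_a = reconstruction_loss(decoder_a(encoder(faces_a)), faces_a)
    loss_b = reconstruction_loss(decoder_b(encoder(faces_b)), faces_b)
    return loss_a + loss_b

def swap_a_to_b(faces_a):
    # Inference-time trick: encode A, decode with B's decoder.
    return decoder_b(encoder(faces_a))
```

This is also why FakeApp needs hundreds of photos of each face: the per-identity decoders have to see enough variation in pose and lighting to reconstruct their face convincingly.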
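Farid's blood-flow cue can be made concrete. Real video of a face carries a faint periodic color fluctuation from the heartbeat, which synthesized footage often lacks, so a crude check is to average the green channel over the face crop in each frame and look for a spectral peak at plausible pulse rates. The NumPy sketch below is a toy illustration of that idea with assumed band limits and stand-in data, not a production forensic detector.

```python
import numpy as np

def pulse_band_fraction(face_frames, fps=30.0):
    """Toy blood-flow check in the spirit of Farid's suggestion.

    face_frames: array of shape (n_frames, H, W, 3) holding RGB face
    crops from consecutive video frames. Returns the fraction of signal
    power falling in the human heart-rate band (0.7-3 Hz, ~42-180 bpm);
    the band limits and the idea of thresholding this score are
    illustrative assumptions.
    """
    # Blood absorption shows up most strongly in the green channel.
    green = face_frames[..., 1].mean(axis=(1, 2))   # one value per frame
    green = green - green.mean()                    # remove the DC offset

    power = np.abs(np.fft.rfft(green)) ** 2
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)

    band = (freqs >= 0.7) & (freqs <= 3.0)          # plausible pulse rates
    total = power[1:].sum()                         # skip the zero-frequency bin
    return power[band].sum() / total if total > 0 else 0.0

# Hypothetical usage on ~10 seconds of 30 fps face crops (random stand-in
# data here; a real test would use crops from a face tracker).
frames = np.random.rand(300, 64, 64, 3)
print(f"pulse-band power fraction: {pulse_band_fraction(frames):.2f}")
```

As Farid notes above, AI may eventually learn past these tells; a generator trained against such a detector could simply synthesize a plausible pulse.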
  • New laws are shining a needed light on data brokers: 
    • Security expert Bruce Schneier has said "Surveillance is the business model of the internet"
    • Data brokers, social media platforms, Internet Service Providers, cell carriers and app makers are aggressively collecting, aggregating and selling our information
    • In many cases, these companies are being hacked and sensitive Personally Identifiable Information (PII) has been leaked or stolen
    • Take the Equifax case for example. If your information was leaked, you probably can't fire them as you are most likely NOT their customer
    • Can you complain or leave the companies that sold your information to Equifax? Doubtful, because you most likely don't know who they are
    • At the moment, there is little incentive for the companies collecting your information to put more money and resources into protecting it, because when it's stolen, the people whose information has been leaked suffer all the consequences
    • The General Data Protection Regulation (GDPR) in Europe and a new law in Vermont are starting to change this situation
    • Vermont's law will improve the security of Vermonters' data, and some of its provisions will help us all. The law requires data brokers holding Vermonters' data to register annually, which will expose who is in the business, since the large companies operate internationally. Registered companies will also be required to disclose their opt-out options
    • Additionally, registered companies must disclose the number of security breaches and the number of individuals impacted each year
    • Vermont's law is fairly lax; a more stringent law could allow individuals to see exactly what information a broker holds, and to correct and even delete that data
    • Vermont's law is the first statewide law of its kind, and was passed despite strong industry opposition
    • In Washington, Representative Norma Smith introduced a similar bill in 2017 and 2018. It goes further, requiring disclosure of what kinds of data the broker collects, but it has languished thus far
    • A 2018 California ballot initiative gives consumers the right to demand to know what information a data broker has collected about them. It will be voted on in November
    • The GDPR passed in 2016 and took effect in the EU on May 25th of this year. It mandates that personal data can only be collected and saved for specific purposes, and only with the explicit consent of the user (a sketch of that kind of consent bookkeeping follows this list)
    • Thanks to GDPR, we’ll get more insight into who is collecting what and why
    • While the law only applies to EU citizens and people living in EU countries, its disclosure requirements will reveal both which companies profit from our personal data and how they do it
    • These new laws, and the ones that will hopefully follow, are pulling back the curtain on this secretive industry and making it possible for us to make more informed choices
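To put GDPR's consent and purpose-limitation rules in engineering terms: every use of personal data should be traceable to an explicit, purpose-specific, revocable consent record. The Python sketch below is a hypothetical illustration of that bookkeeping, with made-up names throughout; it is not a compliance implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of GDPR-style consent bookkeeping: personal data may
# be processed only for purposes the user explicitly consented to, and
# consent can be withdrawn at any time.

@dataclass
class ConsentRecord:
    purpose: str                        # e.g. "order_fulfillment"
    granted_at: datetime
    withdrawn_at: datetime | None = None

@dataclass
class UserConsents:
    user_id: str
    records: list[ConsentRecord] = field(default_factory=list)

    def grant(self, purpose: str) -> None:
        self.records.append(ConsentRecord(purpose, datetime.utcnow()))

    def withdraw(self, purpose: str) -> None:
        for record in self.records:
            if record.purpose == purpose and record.withdrawn_at is None:
                record.withdrawn_at = datetime.utcnow()

    def allows(self, purpose: str) -> bool:
        # Processing is permitted only under an active consent that names
        # this exact purpose -- data collected for one purpose cannot be
        # silently reused for another.
        return any(r.purpose == purpose and r.withdrawn_at is None
                   for r in self.records)

consents = UserConsents("user-42")
consents.grant("order_fulfillment")
print(consents.allows("order_fulfillment"))  # True: explicit consent on file
print(consents.allows("marketing_email"))    # False: never consented
```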
  • Tesla on Autopilot sped up before fatal crash:
    • The National Transportation Safety Board (NTSB) released a preliminary report of its investigation into the Tesla Model X wreck responsible for the death of Apple engineer Wei 'Walter' Huang
    • Huang was driving on US Highway 101 in Mountain View on March 23rd when his vehicle crashed into a road barrier at 71 mph
    • Bystanders dragged Huang from the vehicle alive, but he died at the hospital from his injuries
    • The vehicle's "traffic-aware" cruise control was set to 75 mph, and Autopilot had been operating continuously for the 18 minutes before the wreck
    • A driver can use the "traffic-aware" capability to set a speed as well as a fixed following distance from vehicles in front of the Tesla
    • The NTSB found no evidence that the Tesla's crash-avoidance systems engaged before the crash
    • 8 seconds before the crash, the Tesla was following a vehicle at 65 mph. 1 second later, the Tesla began veering left
    • The Tesla stopped following the vehicle and began accelerating toward the barrier 4 seconds before the crash (see the cruise-control sketch after this list)
    • The NTSB found Huang touched the steering wheel on three occasions, for a total of 34 seconds, during the final 60 seconds before the crash, but not in the last 6 seconds
    • "At three seconds prior to the crash and up to the time of impact with the crash attenuator, the Tesla's speed increased from 62 to 70.8 mph, with no precrash braking or evasive steering movement detected," said the NTSB report
    • "All aspects of the crash remain under investigation as the NTSB determines the probable cause, with the intent of issuing safety recommendations to prevent similar crashes," noted the NTSB