Comparing the privacy properties of Google Bard and ChatGPT

I just got beta access to Google Bard. As many of you know, there are quite a few questions around the privacy properties of ChatGPT, exemplified by the investigation by the Italian data protection authority, the “Garante”. One of those questions concerns its treatment of information about living persons, especially misrepresentation of that information, which raises issues of legality under the GDPR and other privacy regulations worldwide.

So, I went on to test that in ChatGPT and Google Bard.

To test, I asked for information on two living individuals: one relatively obscure but still searchable, with documents available on the internet, and one prominent. That is, me and Elon Musk, respectively.

Here is the response from ChatGPT.

(Source) ChatGPT 3.5 response as of 2023-04-18. Note: The response based on GPT-4 was not much better.

It does reply, and the reply is in fact full of inaccurate information about me. Since I did not want to disseminate such inaccurate information, I decided to show it as an image instead of text, although that is undesirable from an accessibility point of view. Providing such “fake” information about a person may constitute a serious privacy infringement and should be taken seriously.

In contrast, Google Bard replied like this:

(Source) Google Bard as of 2023-04-18.

While it may be much less interesting for people, this probably is how it should behave1. It should not make up an answer about a living person. A clear win for Google Bard in this round.

Footnotes

  1. Having said that, Perplexity.ai, which combines OpenAI’s GPT-3 with search results to form referenceable responses, seems to provide a correct answer to the first question and can be quite useful. It remains to be seen how such technologies develop.

2 Replies to “Comparing the privacy properties of Google Bard and ChatGPT”

  1. I think you are confusing two different areas of law: privacy and libel. This is important in that libel law is already well established, but will likely need to be updated separately.
    Another problem is responsibility. Who is liable? The source of the data, the aggregator of that data, or the website that displays that data?

    1. I am not talking about libel here but about the accuracy principle. I actually doubt the above response from ChatGPT would constitute libel, while it certainly violates the accuracy principle.
