> unacceptable risk applications — the focus of this month’s compliance requirements — will be prohibited entirely. Some of the unacceptable activities include:

what follows is a list of some pretty nasty and insidious use cases.

it’s not “AI is completely banned”, it’s “consider the use cases you are working on responsibly”. only for those specific use cases, mind you.

for all other use cases not in the list, which is a significantly larger subset of development, just ensure you do the required safety/regulatory sign off work.

just like when we get our SaaS webapps evaluated for compliance with security standards, it's just a box-ticking exercise for the most part.



> AI that tries to infer people’s emotions at work or school.

When I talk to ChatGPT advanced voice mode with a happy and upbeat tone, it replies similarly. If I talk to it in a more neutral way, it adapts too. The AI thus infers my emotions. I use ChatGPT at work, and my company pays for it.

Sounds like I should sue.

Also, I am trying to implement a new policy for pull requests in my tech team. We sent an anonymous form to gather feedback. I pasted all the responses in one block to ChatGPT and asked it to summarize the feedback. The AI indicated that “generally people seem pretty happy about the new policy”. Should I go to jail now for being clearly a deranged madman according to the EU?


from the actual act, which is linked in the article

> the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions

emphasis mine.

chatgpt was not specifically put into service to monitor your emotions at work.

so it’s fine i’d say. and your pull request thing is fine.

also, you’re not trying to infer the emotions of any specific natural person. you’re trying to gauge satisfaction with a process. that’s different to working out whether someone is feeling sad or lonely in the workplace because they “aren’t smiling enough”.

unfortunately that means you can’t sue and get a payday.

edit — i find it kind of funny that people are knee jerk reacting emotionally to this. kind of ironic when you consider the example at hand.


And it's kind of funny when non-legal experts attempt to say something is fine or not.

It depends highly not only on how it's written but on the spirit of what the EU is attempting to do. The knee-jerk reaction is probably because, historically, institutions do a terrible job of writing rules, especially rules around new technology.


you are right to call me out. IANAL. that's on me.


I'm not entirely trying to call you out, but the devil is in the details. I don't necessarily disagree with the ideas, but implementations often fall short, and you end up with lots of litigation in court to figure out the spirit of the rule. My main complaint here is: why is so much of the focus on AI implementations? What is AI? Should the rules not be crafted around the protection of certain classes and ideas, rather than around the implementation of AI?


An actual lawyer will probably agree with you. And also charge me to give me the same opinion. They will also say “but if you end up being sued, it’s not my fault”.

So basically I just need to second-guess everything I do until someone somewhere gets sued and loses, and another dude gets sued and loses. At that point we will have some idea of what the law really entails (at which point they change it and the cycle restarts).

In the meantime, my US competitors are just moving full steam ahead.



