atomone-hub/future-security-agency #1

Open
DigimosNomad opened this issue Jan 30, 2025 · 0 comments
DigimosNomad commented Jan 30, 2025

Good points. I would also add that the most widely used AI application is OpenAI's ChatGPT, which is closed source and operated by a for-profit company. The source code should always be open source, and, as already explained, developers should have better tools to interpret it.
Despite privacy and data protection laws, no one knows for certain how the data stored on these servers is handled or whether it is shared further. This gives service providers an opportunity to act maliciously and in violation of privacy laws, for example by generating social credit profiles of users, which could be used against individuals or even for blackmail if their identities are doxxed. With the press of a button, a complete profile of all your conversations with an AI could be generated in seconds and stored in databases controlled by the service provider.
There is already evidence of service providers collecting and surveilling user data on the internet, as well as selling it to third parties.
Another problem arises when AI-generated content is accepted too uncritically, without questioning its factual accuracy. This leads people to form incorrect beliefs that they then treat as indisputable truths. Additionally, as already mentioned, AI can serve biased content to users.
There are already studies suggesting that AI use has begun to erode human cognition because the brain is no longer challenged in the same way as before, now that answers can be summarized in one click. This is particularly concerning in schools, where AI is already widely permitted in learning and students' brains are still developing.
Another major issue is scams and identity fraud, deepfakes, and the "dead internet" phenomenon, where AI bots that mimic humans flood the internet.
In summary, AI-related risks such as privacy and data security issues, disinformation and misinformation, biased content production, surveillance and social credit scoring, scams and identity theft, deepfakes, and the "dead internet" are just a few of the concerns. Governments and their institutions should take significant responsibility for discussing these dangers and educating the public about them ever more widely. AI investment is currently growing rapidly, with more money and resources being funneled into development that focuses primarily on the technology's benefits, when the focus should instead be on its drawbacks, with security concerns as the priority.

International organizations established to develop and oversee a specific realm often tend to evolve into vehicles for conflicts of interest rather than remaining neutral and sincere. The whole world ends up listening to them without prejudice, treating their positions as a common vision to be pursued at decision-making tables. And surely such an international organization is either on the way or already established. The worst part is allowing private funding, which gives external powers a seat and influence in governance, granting private individuals and corporations a say, something that should be prohibited from the start. We should already be thinking about how impartial anti-corruption units would monitor activities per funding country, the turnover of personnel within the organization, the length of their terms, and other factors, rather than addressing these concerns only once such an organization is already in operation nationally.
