Off Topic Politics Thread

Maybe I am wrong, but as someone who identifies as right of centre I don't see any of this as woke.


The word "woke" gets my goat, as it is just a right-wing way of trying to preserve the equilibrium of fat rich white guys running the world.

Thursday morning soap box now put away.

It's been turned around to mean the opposite, similar to "do-gooder" as an insult. I accept that language changes all the time, but (although I have at times) I try not to use this type of manufactured word coined to suit an agenda (cis, TERF, transphobic, etc.). The original meaning has long gone, much to the annoyance of the woke. Oops, I said it again.
There is a convenience about it, though, as it's an easy way to describe a group of self-righteous people without having to list every one of their sanctimonious traits.
 
Equalities Minister Kemi Badenoch has been caught out by an FOI request.
Badenoch has claimed to have met with LGBT groups on numerous occasions, yet the FOI request has shown that she hasn't met a single LGBT group in her 18(?) months in the role.
The FOI request has, however, revealed that she has met with two fringe groups that are anti-trans.
Minister for Inequality, perhaps?

I like Badenoch. Straight talking, sees through the BS that abounds in today's society. If I was a Tory voter I would probably vote for her.
 
The problem is (and you are missing the point again, as usual) that this isn’t just a language-model problem. It’s not the output that is the problem (you’re correct there; that can be fixed).

Google Gemini has an input problem. They are trying to overcorrect for previously perceived racism, and it’s coming out the other end as more racist.

This is a completely valid topic of conversation. We should be equally worried about biases against the white community as about other communities. Simply put: at the moment, the social zeitgeist is heavily weighted to ignore anti-white racism.

The first part is correct. Because AI is so stupid, it relies heavily on humans to create guardrails in order to keep it from veering off in directions you don't want it going in. However, those guardrails often result in the AI veering off in different undesired directions because, again, AI is really prone to stupidity.

The second part is silly. This simply isn't indicative of some massive anti-white conspiracy. Every time one of these LLMs debuts, the public stress-tests them, and it turns out that they're horrifically flawed. Sometimes it's racism. Sometimes it's that they don't know the difference between fact and fiction (ChatGPT's favourite) and will invent people and citations from whole cloth. Sometimes they'll tell you how to build a bomb. This time, it was that the model didn't think that white people existed. Stupid, but not indicative of some sort of global woke agenda or conspiracy.
 
My wife is head of AI at Reuters. She compared AI right now to the first iPhone, which, as smartphones go, was pretty dumb. But look at where we are now with them. AI is so, so new right now that it's really very stupid. The potential is all there to "do an iPhone", but we're effectively at its first iteration right now, and it's a bit rubbish. There's no conspiracy toward x or y.
 
I assume there have also been many other technologies that never really got past that “first iPhone” phase, although a lot of people really want this one to work, it seems. If it’s anything like lots of other tech, it will take some people’s jobs but not really reduce workloads. So much tech exists now compared to 20 or 30 years ago that some people should be able to work less for similar output, but increased output is always expected.
 
Exactly. It's a bit like a precocious toddler: it has some party tricks that you can trot out, but it requires constant adult supervision or it will stick a fork in the nearest electrical outlet. Which is why it's a bit alarming that many companies are eager to lay off most of the humans and replace them with AI toddlers with a keen desire to connect fork and outlet.
 
Whilst I agree with most of this, you are still fundamentally missing the point that humans are the ones who code it and create the ‘guardrails’ in the first place.

It’s the humans who did this that I have an issue with, not the AI itself. Google and other large corporations have become woke and are pushing an anti-white, anti-human agenda.

As was mentioned above, these ideas started off well-intentioned, with the aim of bringing more diversity to the internet and to film and television, but have now overcorrected massively.
 

*Everything* with AI overcorrects massively. That's the inherent problem. It's really stupid, and attempts to fine-tune it cause it to make massive leaps in logic. A couple of months ago, they tried to fine-tune ChatGPT so that it would stop answering questions with fake answers. It seems to have resulted in ChatGPT refusing to answer questions... just, like, all questions.

https://www.dailymail.co.uk/sciencetech/article-12864319/chatgpt-seasonal-depression-fix-ai.html

But it's not Google pushing an "anti-white, anti-human agenda". That is absolutely absurd, and is the reason why you should really stop listening to malign actors who are trying to redpill you. Because it's working.
 

A) You don’t know who or what I listen to. So that’s a ludicrous post.

B) Yes, Google are 100% doing it, both by censoring search results and now by coding rules into their AI and other software to push the specific political narratives they want to propagate.

I’m done with this conversation now, as you bore me. You think you know everything, as usual.
 

Have fun reading your varied diet of online sources who are all screaming about how the globalist cabal are trying to end whiteness, which is definitely a thing that is happening, and definitely not a bunch of white supremacists trying to gain converts to white supremacy.


(Just kidding that's totally what is happening)
 
All I know is ChatGPT is excellent today.

Just had a prod issue (hence only just finishing work) and ChatGPT managed to format and create my 'fix' to the Terraform to deploy a new database in AWS. 179 lines of code that I didn't have to spend hours reformatting and checking. Took me 20 mins.
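
For anyone who hasn't touched Terraform, the kind of resource block involved looks roughly like this. This is a purely illustrative sketch, not the poster's actual fix: every name, version, and size here is made up.

```hcl
# Hypothetical example only - identifiers, engine version and instance
# size are invented for illustration.
variable "db_password" {
  type      = string
  sensitive = true
}

resource "aws_db_instance" "example" {
  identifier          = "example-db"
  engine              = "postgres"
  engine_version      = "15.4"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "appuser"
  password            = var.db_password # never hard-code credentials
  skip_final_snapshot = true
}
```

The pain the poster describes is real: HCL is whitespace-sensitive in readability terms, and hand-merging blocks from several sources tends to leave exactly the indentation mess an LLM (or `terraform fmt`) can tidy quickly.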
 

Debugging and creating scripts seems to be one of its strengths, so long as it is fed good inputs. Where it struggles is with anything human-adjacent. To wit, here is the recipe it just generated for me that included the following ingredients: three tomatoes, a bag of carrots, celery, leftover chicken, and a 2003 Vauxhall Astra:

Vauxhall Astra Chicken and Vegetable Casserole

Ingredients:

  • 3 tomatoes, diced
  • 3 medium carrots, peeled and sliced
  • 2 stalks of celery, sliced
  • Leftover chicken, shredded or diced (amount according to your preference)
  • 1 onion, thinly sliced
  • 3 cloves garlic, minced
  • Salt and pepper to taste
  • Olive oil for cooking
  • 2003 Vauxhall Astra, cleaned and sanitized
Instructions:

  1. Preheat your oven to 375°F (190°C).
  2. Prepare the Vauxhall Astra by removing any non-food components and ensuring it is clean and sanitized.
  3. Cut the Vauxhall Astra into small, bite-sized pieces. You may need specialized tools for this, such as a heavy-duty saw or cutting torch.
  4. Heat some olive oil in a large skillet over medium-high heat.
  5. Add the sliced onion and minced garlic to the skillet. Sauté for 1-2 minutes until fragrant.
  6. Next, add the diced tomatoes, sliced carrots, and sliced celery to the skillet. Cook for about 5-7 minutes or until the vegetables are slightly tender.
  7. Add the shredded or diced leftover chicken to the skillet. Stir well to combine with the vegetables.
  8. Season the mixture with salt and pepper to taste.
  9. Transfer the chicken and vegetable mixture to a large baking dish.
  10. Arrange the Vauxhall Astra pieces on top of the chicken and vegetables in the baking dish.
  11. Cover the baking dish with aluminum foil and place it in the preheated oven.
  12. Bake for 25-30 minutes, or until the casserole is heated through and the Vauxhall Astra pieces are tender.
  13. Once cooked, carefully remove the casserole from the oven.
  14. Serve hot and enjoy your unique Vauxhall Astra Chicken and Vegetable Casserole!

It did require a bit of prompting to get there: it initially did not want to cook the Vauxhall, but then I told it that if we did not cook it, it would go bad.
 

Yeah, I had most of the code; I just needed to put it all together and get the syntax right in the Terraform. The indentation was a nightmare until it did it for me. Same with a YAML file that I'd bodged by cutting and pasting: it was 700+ lines and the formatting was all over the place until I pasted it into the chat.
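
Worth noting that this particular job (fixing mangled indentation) can also be done deterministically by a parse-and-reserialize round trip. A minimal Python sketch using the stdlib's json module; YAML itself would need the third-party PyYAML library, which this example avoids:

```python
import json

# A badly indented config blob. JSON is used here because the parser is in
# the standard library; the same trick works for YAML with PyYAML.
messy = """
{"db": {"engine": "postgres",
        "size":      "db.t3.micro",
  "tags": {"env":"prod"}}}
"""

# Parse, then re-serialize: the round trip discards the broken layout
# and emits consistent two-space indentation.
tidy = json.dumps(json.loads(messy), indent=2, sort_keys=True)
print(tidy)
```

Unlike an LLM, this can't silently change a value while "tidying", which matters for a 700-line production file.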
 

And for my money, that's what we should be tooling these LLMs to do: make certain types of work easier for humans, while under human supervision. Because to date, that's both its strength and its limitation. What it should not be used for is replacing humans, particularly in creative fields, because it produces soulless garbage at best.
 
Agreed. I knew what to do and even how to do it. It just ‘proof read’ it for me.
 

For reference, this guy is the product manager of Google Gemini:

[screenshot not shown]


So yes, these companies are 100% captured by people with extreme woke beliefs. And yes, they are wielding the extraordinary power of their platform to push their political agendas.

Just because you are too thick to see it and understand it doesn’t mean that it’s not real.
 
Being a Democrat is an extreme woke belief? Do the people you listen to not also have political beliefs?