AI use is certainly a controversial topic in 2025. There are strong proponents of generative AI, who believe it's going to transform the world, possibly leading to an AGI that could expand human intelligence or even subdue us. On the other side are those who think it's a pointless, energy-wasting endeavour of the techbro sector and that the bubble is due to burst soon.
One thing seems clear, though: there has been some obvious progress between the first generation of "glorified autocorrects" —the early GPTs— and the current generation of DeepSeek R1, Gemini 2.5-Pro, and the like.
This progress isn't likely to vanish, even if the bubble bursts and all development halts. Whether we like it or not, AI tools exist, and even in their current state, they possess some genuinely useful capabilities.
A big problem with how AI tools are often used today stems from marketing hype. Users are led to expect that AI can just do things for them: "Write this article", "Generate this image for me", "Make this app", and boom, magic happens. While that might be the reality in 2030, it's definitely not the current state: trying to get an LLM to do all the work for you is usually a fast track to a passion-free, low-quality pile of slop.
However, this doesn't mean they aren't useful. For example, they can be great for pulling together lots of information quickly. If you have 20 minutes to absorb the key points from 10 research papers, feeding them to an LLM for a cohesive summary will likely result in learning more (and better) than trying to skim each article in under two minutes. This selective, strategic use of AI is where the real advantage lies.
This situation means people who know how to use AI well —leveraging what it actually does effectively, rather than just tossing prompts blindly— will have a competitive edge. In today's hyper-competitive capitalist society, that edge can't be dismissed.
However, this is not, and can't be, a justification for using AI whenever, wherever, whatever the cost. For instance, the issue of LLMs and image generation models being trained on copyrighted data remains largely unresolved. If the end product of your work is the graphic material for an advertising campaign, using image generators might not only be unethical but also result in customer backlash.
So here we are, at this crossroads where we need to use generative AI to remain relevant and avoid being pushed aside, while simultaneously needing to avoid indiscriminate use that undermines our values or lowers the quality of our work.
This means we need a principled approach to using AI: a framework aligned with our own personal values that we can reference to decide when to use AI and when to avoid it.
Some principles to drive our use of AI 🤖
Each of the principles below examines the implications of using AI from a deliberately narrow point of view, so that we can combine them according to our personal values and decide whether a particular use of AI is worth it.
Energy Usage 💡
It's no secret that AI can be quite energy-hungry, because of what's involved in firing up GPUs to do our work for us. Image generation models are particularly thirsty. Researchers have dug into this; for example, this research paper on the topic is quite good if you're curious.
Roughly speaking, generating one image with AI will use around 3 Watt-hours (Wh) of energy. To put that in perspective, that's roughly 20% of a full charge of an iPhone 16 or 16 Pro. If you could somehow do that computation directly on your phone, you'd only get about five images before the battery died. That's a scary use of energy.
That's just the energy for using the AI (the inference step). On top of this, we need to add the massive energy cost of training the model in the first place. Training these things requires huge data centres chugging power and water. The good thing is that this huge energy hit only happens once: the more times the model gets used, the smaller the slice of that training energy gets per individual use.
Take Stable Diffusion XL, for example. Training it is estimated to have taken about 150,000 GPU hours according to Civitai – roughly equivalent to 60 MWh of electricity. But it gets used a lot – the total number of image generations is between 690 million according to the official channels, and 12 billion according to the most generous estimates. Even using a conservative estimate of 690 million uses, the training energy adds less than 0.1 Wh per image. So, while the training energy is huge overall, the slice added to each individual image generation is actually pretty small compared to the energy used for the generation itself.
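To make the amortisation concrete, here's a quick back-of-the-envelope check in Python, using the figures quoted above (60 MWh of training, a conservative 690 million generations, and 3 Wh per generated image):

```python
# Back-of-the-envelope: how much training energy each SDXL image "inherits".
TRAINING_ENERGY_WH = 60e6    # ~60 MWh of training, expressed in Wh
TOTAL_GENERATIONS = 690e6    # conservative estimate of lifetime image generations
INFERENCE_ENERGY_WH = 3.0    # energy to generate a single image

amortised_training_wh = TRAINING_ENERGY_WH / TOTAL_GENERATIONS
total_per_image_wh = amortised_training_wh + INFERENCE_ENERGY_WH

print(f"Training energy per image: {amortised_training_wh:.3f} Wh")  # ~0.087 Wh
print(f"Total energy per image:    {total_per_image_wh:.2f} Wh")     # ~3.09 Wh
```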
So, does this mean AI image generation is always bad in terms of energy use? Not necessarily. If you genuinely need an image – say, for an article or some marketing material – the real question is how its energy use compares to the alternative, like firing up Photoshop.
If you're working on a reasonably efficient laptop like a modern MacBook Pro, you might be using around 15 Watts doing creative tasks. At that rate, working for about 12 minutes uses roughly the same energy (3 Wh) as one AI image generation.
The 12-minute equivalence gives us a handy rule of thumb. Think an image would take you an hour (60 minutes) in Photoshop? If you reckon you could get a usable AI version in fewer than five attempts (5 attempts * 12 minutes/attempt = 60 minutes), then the AI route might actually save energy.
We can do a similar calculation for text generation. Skipping some of the heavy math, a big model like GPT-4 might average around 0.5 Wh per query (including a tiny slice of its huge training cost plus the energy for the query itself). On that same MacBook, that's equivalent to about 2 minutes of your active work time.
Note: Numbers used: 1,750 MWh of training energy, amortised over 10 million daily queries (a yearly total of 3.65 billion queries); that works out to roughly 0.48 Wh per query for training, plus 0.05 Wh per query for inference, taken from the study quoted above.
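The same arithmetic, spelled out (the 0.5 Wh figure in the text is just this total, rounded):

```python
# Rough per-query energy for a large LLM, using the numbers from the note above.
TRAINING_ENERGY_WH = 1_750e6     # 1,750 MWh of training, in Wh
QUERIES_PER_YEAR = 10e6 * 365    # 10 million daily queries over a year
INFERENCE_ENERGY_WH = 0.05       # energy for the query itself

per_query_wh = TRAINING_ENERGY_WH / QUERIES_PER_YEAR + INFERENCE_ENERGY_WH
print(f"{per_query_wh:.2f} Wh per query")  # ~0.53 Wh, i.e. roughly 0.5 Wh
```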
So, for text, the question becomes: could you achieve the same result yourself in less than 2 minutes? If yes, doing it manually is likely more energy-efficient. If no, then surprisingly, using the AI might actually save energy.
| | Consumption per prompt | Equivalent MacBook usage |
| --- | --- | --- |
| Image Generation | 3 Wh | 12 minutes |
| Text Generation | 0.5 Wh | 2 minutes |
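If you want to play with these break-even numbers yourself, here's a tiny helper using the same assumptions as above (a 15 W laptop and the per-prompt figures from the table):

```python
# Break-even helper: minutes of manual laptop work that use the same energy as N AI prompts.
LAPTOP_WATTS = 15.0                                 # assumed draw while doing creative work
ENERGY_PER_PROMPT_WH = {"image": 3.0, "text": 0.5}  # figures from the table above

def breakeven_minutes(kind: str, attempts: int = 1) -> float:
    energy_wh = ENERGY_PER_PROMPT_WH[kind] * attempts
    return energy_wh / LAPTOP_WATTS * 60            # Wh / W = hours, then to minutes

print(breakeven_minutes("image"))              # 12.0 -> one image ~ 12 min of manual work
print(breakeven_minutes("image", attempts=5))  # 60.0 -> five attempts ~ a one-hour Photoshop job
print(breakeven_minutes("text"))               # 2.0  -> one text query ~ 2 min of typing
```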
Digital rights and ownership 🖼️
Next up is the messy, hotly debated topic of digital rights. ML models, especially image generation models, are trained on huge datasets scraped from the web, often without the permission of the content creators.
Unfortunately, the law hasn't quite caught up with the tech here, so it falls on us to be informed and figure out what feels ethical. There isn't an easy guideline, as the implications shift dramatically depending on what we use AI for. For instance, using image generation to mimic an artist's unique style feels ethically dubious, too close to taking advantage of them or outright stealing their work. However, it's hard to imagine anyone getting upset about using AI to summarise your own notes.
What makes these cases so different? It often boils down to a few factors:
First, consider intent – what are you planning to do with what comes out of the AI model? If it's for commercial use, the crucial question is whether you're potentially harming someone. For example, selling prints in an artist's style might devalue their work and impact their livelihood. At the other extreme is purely personal use, like creating a desktop wallpaper just for yourself or summarising your own notes. This is very unlikely to harm creators, unless you're using it directly to avoid paying for someone's services (like generating an image instead of commissioning art you would otherwise have paid for).
Sitting somewhere between commercial and purely personal use is sharing AI-generated content publicly, but without commercial intent. Legally, it would be quite hard to argue that you're causing harm or loss of business to anyone, but it's still ethically tricky and can cause backlash in certain situations, especially if you don't disclose that the content was AI-generated.
Beyond intent, another crucial layer is originality. If you use AI to design some generic buttons for an app, nobody is going to care. The same applies if you ask for a table summarising Sweden's population demographics. These examples rely on publicly available information or common design patterns, and require little creativity. However, if you asked an AI to generate an image in the style of a specific artist, or create a poem about Sweden in the style of a famous national poet, you're likely to get in trouble. So, generally, the more generic your request, the better reception you'll get.
A third factor is the degree of automation involved. Using AI to create buttons for an app isn't the same as getting an LLM to generate the whole application for you and proudly declaring yourself a vibecoder. Similarly with images: there's a world of difference between getting an AI to generate mundane parts of a comic, like a simple sky background, and trying to pass off a fully AI-generated comic strip as your own creation.
This is partly because art and creation are, to some extent, equated with effort. There's nothing inherently wrong with using tools that reduce effort – photographers aren't looked down on by hyperrealistic painters. However, a good photographer doesn't just press a button randomly and hope for the best; they invest effort and time to find the right moment, the right settings, the right composition.
Since AI is trained on existing data, anything it generates is by definition derivative. That's perfectly fine if all you want is a summary of your notes. But if you want to create an original blog post, using it extensively is only going to result in bland slop. You're better off using it for more precise tasks instead – to correct your typos, brainstorm ideas, or generate an initial outline, for example.
These three considerations – intent, originality, and degree of automation – point towards a core idea: don't use AI to poorly imitate human creativity or pass off automation as personal effort. Human input and effort bring unique value and perspective, regardless of the individual's skill level. When AI shifts from being part of the creative process to being a replacement for it, originality suffers, and the result risks being... well, slop.
| Factor Considered | Ethical Guideline / Rule of Thumb |
| --- | --- |
| Intent of Use | • Commercial: high caution if potentially harming creators. • Personal: generally low risk. • Public/Non-Commercial: medium risk; requires transparency. |
| Originality | • Mimicking Style: high caution needed. • Generic/Factual: generally lower risk. |
| Level of Automation | • AI as Replacement: high caution; often leads to lower quality. • AI as Assistant: lower risk; maintains human oversight. |
Misinformation and Hallucinations 🧠
As of 2025, all LLMs are prone to hallucinations. This problem might get resolved in the future, but to use AI today we have to consider this limitation.
Not taking this into account is what has made many current implementations of AI an absolute disaster.
Apple Intelligence and Google Search are perhaps the best-known cases. Users complain that Apple Intelligence has been confidently mis-summarising the contents of messages and notifications, presenting wrong information. Google's AI search summaries have suggested users eat rocks or put glue on pizza.
To top it off, there is no known fix today. We have to accept that hallucinations are a limitation of current AI and that we can't avoid them. This reality leaves us with two main practical strategies for using today's AI responsibly:
- Use AI in contexts where we don't care about hallucinations
- Use AI in contexts where hallucinations are easy to spot
This circles back to using AI as a tool that needs review – part of the creative process, not a replacement for it. When we use the tool to brainstorm (e.g. "provide ideas for fun holiday plans"), we don't really care about hallucinations. We're going to skim the suggestions, keep anything worth turning into an actual plan, and discard the hallucinations along with the bad ideas.
The same could be said about images. If you plan to use an image generation model to quickly mock up 20 variants of a logo and understand what can work in terms of colour, style, etc., you don't care about the logos having malformed text or artefacts – you only want the general idea anyway.
The second option is when hallucinations are easy to spot. For example, if I want to summarise my long notes on a topic to send them to someone else, I do care about hallucinations. So I am going to read through the summary, and, since they're my own notes, I can quickly notice if anything doesn't match what I would have written myself. Depending on the length, this might still save me significant time, as reviewing is usually far faster than writing or rewriting.
What is obviously not a good idea is to use it for things where you can't verify that the information is accurate, such as summaries of news, text messages, or search queries on topics you don't know about. If you have to do that, use a tool that shows the sources for the information (preferably inline) so that you can click and verify anything that feels interesting or out of place. Otherwise, you risk trusting what is essentially a sophisticated text generator. Remember: LLMs excel at sounding plausible, but accuracy isn't their core objective.
Explainability 💬
Another limitation to add to the list is the lack of explainability of AI models, which are often called "black boxes" for a reason. Peeking inside to see how they reached a conclusion is incredibly difficult – unlike traditional programming, they're made up of many layers of mathematical operations (think matrices), and the intermediate steps usually don't make sense to humans.
Again, the key to using them successfully is knowing the limits and working with them, not fighting against them. Just like with hallucinations, the practical approach is using AI only for use cases where we don't care about this limitation – where we don't need to know the "how" behind the "what".
Fortunately, for many creative uses, explainability isn't a big deal. If I ask an LLM to brainstorm ideas or fix my typos, I don't need to know how the AI got to that conclusion. When summarising I care about whether the summary is accurate, not the precise method used to generate it.
Where this "black box" nature becomes really problematic is in automated decision-making. Take reviewing CVs, for example. If an AI-based system is set up to accept or reject candidates automatically, it will be making its calls based on internal calculations we can't check – and yes, those calculations might include hallucinations or hidden biases. That's a terrible use case of the tool, potentially hallucinating someone right out of a job without a clear reason.
A much better application in that scenario, even if it saves a bit less time upfront, looks more like this (a rough code sketch follows the list):
- The hiring manager writes the job description.
- An LLM tool scans the submitted CVs, highlighting potential matches – maybe even spotting connections a human might miss (like recognising that GIMP experience is relevant for a role that asks for Photoshop skills).
- The LLM annotates the CVs with these findings, adding reasons why a candidate might be a good fit.
- The human hiring manager then reviews the CVs, armed with these AI-generated annotations as helpful context, but ultimately making the decision themselves.
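As an illustration of that assistant-style flow, here's a short Python sketch. The `call_llm` function is a stand-in for whichever LLM API you actually use, and the prompt wording and data structures are my own assumptions; the point is the shape of the workflow, with the actual decision deliberately left outside the code:

```python
from dataclasses import dataclass

@dataclass
class AnnotatedCV:
    candidate: str
    notes: str   # AI-generated context for the hiring manager, never a verdict

def call_llm(prompt: str) -> str:
    """Stand-in for whichever LLM API you actually use (hosted or local)."""
    raise NotImplementedError("plug your LLM client in here")

def annotate_cvs(job_description: str, cvs: dict[str, str]) -> list[AnnotatedCV]:
    """Ask the LLM to highlight potential matches; it does not accept or reject anyone."""
    annotated = []
    for candidate, cv_text in cvs.items():
        prompt = (
            "You are assisting a human recruiter. Do NOT make a hiring decision.\n"
            "List the skills in this CV that relate to the job description, "
            "including close equivalents (e.g. GIMP experience for a Photoshop role), "
            "and briefly explain why each one might be relevant.\n\n"
            f"Job description:\n{job_description}\n\nCV:\n{cv_text}"
        )
        annotated.append(AnnotatedCV(candidate, notes=call_llm(prompt)))
    return annotated

# The hiring manager reads the annotated CVs and makes the actual call;
# nothing in this script filters candidates automatically.
```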
This keeps the human firmly in charge of the decision-making process while still leveraging the AI's ability to quickly process text and spot patterns. Especially in technical roles, CVs can be dense with jargon (specific programming languages, chip architectures for embedded systems, obscure engineering software). Expecting HR teams to be experts in every niche is unrealistic; an LLM here can act like a specialised research assistant.
Now, some might think, "But I can just ask the LLM why it made a decision!" We need to be careful here. When you ask an LLM for an explanation, it will generate text that looks like an explanation. However, this generated text doesn't necessarily reflect the model's actual internal process (which, remember, isn't really accessible or human-readable). It's just giving you the most plausible-sounding justification it can construct.
Long story short: as of today, you can't truly get inside the AI's "head" to understand its reasoning. So the principle is clear: avoid using these black-box models in situations where understanding the decision-making process is critical or legally required. Otherwise you risk ethical and legal consequences or, even worse, putting lives at risk (e.g. in the medical field).
Crafting our AI principles 📝
AI is a powerful, controversial tool with its strengths and limitations. For most people, ignoring it isn't really an option, but we cannot dive in blindly. It's important to understand the limitations, the ethical problems, and ensure we use it responsibly. Using AI is dead easy; using it well, understanding all the implications, is trickier.
To use AI effectively, we need to fit it around our principles, making sure we don't end up doing something we would consider unethical without even realising it. Some important pillars are:
- Being mindful of energy consumption, as AI can sometimes use vast amounts of energy, but at other times it can actually save some.
- Being careful with intellectual property and digital rights, so that we don't end up stealing other people's work.
- Accounting for hallucinations and misinformation, and using AI in a way that lets us catch hallucinations before they end up in the final result.
- Avoiding the use of AI for critical decisions, where its lack of explainability could have serious consequences.
This isn't an exhaustive list – we could all name hundreds of principles that matter to us. Each person, each use, each tolerance for risk will require a different set of rules and principles. This isn't a set rulebook but the starting point for a personal framework that we can apply to our use of AI.
Whether we like it or not, AI is here to stay. It doesn't come without dangers, but with skill, awareness, and the responsibility that comes with information, we can use it without feeling guilty about it – and keep evolving our ways of working with it as we learn and the tools improve.