Obiektyw: A browser extension for critical media analysis

In today’s digital landscape, where information flows at unprecedented speeds and truth seems increasingly subjective, I thought it would be nice to have something that could help me sift out content not worthy of my time. Hence Obiektyw: a browser extension designed to help critically analyze online media content. Although I initially created it for my personal use, I realized this tool could benefit others in navigating the complex world of online information.
From Walter Lippmann to Digital Media: The Evolution of Information Challenges
The initial idea emerged from reflecting on Walter Lippmann’s “Public Opinion” from 1922. Lippmann warned about how public perception is shaped not by direct experience but by the media-created “pictures in our heads”. He argued that our understanding of the world is always distorted by simplifications and biases.
A century later, Lippmann’s concerns have only intensified. What he couldn’t have anticipated was how digital platforms would accelerate and amplify these distortions. Today, misleading information doesn’t just spread - it propagates at lightning speed, reaching millions before fact-checkers can even begin their work.
This acceleration creates new vulnerabilities in how we process information. As Robert Cialdini demonstrated in his book “Influence”, humans are prone to persuasion through specific triggers that bypass rational thought. In the online environment, these psychological vulnerabilities are systematically exploited. Too much information, too little time: as Daniel Kahneman’s work on cognitive biases revealed, we rely on mental shortcuts rather than deliberate analysis when consuming information. It’s either that or cognitive overload - exactly the behavior that digital platforms exploit and amplify with the help of engagement algorithms.
The Disinformation Security Challenge
Recent history provides troubling examples of how vulnerable our information ecosystem has become. Cambridge Analytica’s micro-targeting of voters with psychologically tailored content demonstrated how personal data could be weaponized to manipulate public opinion at scale.
More alarmingly, Russia’s hybrid warfare tactics have shown how disinformation can be wielded as a military weapon: flooding media channels with conflicting narratives and exploiting emotional triggers to paralyze decision-making and undermine public trust in institutions.
These challenges have given rise to a new field: disinformation security. Similar to how cybersecurity protects digital infrastructure, disinformation security aims to protect cognitive infrastructure - the mental models and belief systems that inform our understanding of reality. This emerging discipline combines elements of media literacy, psychology, data science, and security practices to create safeguards against information manipulation.
But perhaps most concerning is the shift in media rhetoric from informative to polarizing. News outlets increasingly prioritize emotional engagement over factual reporting, creating content designed to trigger outrage rather than understanding. Headlines are crafted not to inform but to provoke, and nuance is sacrificed at the altar of virality. This trend is perfectly illustrated by Betteridge’s law of headlines, which observes that “Any headline that ends in a question mark can be answered by the word no.” Headlines like “Could This New Discovery Cure Cancer?” or “Is Your Smartphone Spying on You?” exploit curiosity and fear, even when the article itself ultimately cannot support such dramatic claims. Similarly, the rise of clickbait - those tantalizing, often misleading headlines designed solely to generate clicks (“You Won’t Believe What Happened Next!”) - represents the triumph of engagement metrics over journalistic integrity. These tactics systematically undermine the quality of public discourse by prioritizing emotional reaction over accurate information transfer.
Technical Development of Obiektyw: A Step-by-Step Process
The development of Obiektyw followed a structured approach that integrated browser extension architecture with AI capabilities. Here’s how I built it:
1. Setting Up the Extension Framework
I started with the basic extension structure following Chrome’s extension development guidelines:
// manifest.json
{
  "manifest_version": 3,
  "name": "Obiektyw",
  "permissions": ["activeTab", "scripting", "storage"],
  "host_permissions": ["https://api.anthropic.com/*", "https://api.search.brave.com/*"],
  "action": { "default_popup": "popup.html" },
  "background": { "service_worker": "background.js" },
  "content_scripts": [{ "matches": ["<all_urls>"], "js": ["content.js"] }],
  "icons": { "128": "128.png" }
}
This established the foundation for all extension components and their interactions.
2. Implementing the Analysis Trigger
Next, I created a pop-up interface intended to display the analysis results as an organised UI, with an “Analyze” button as the starting point and an event listener to intercept that action:
document.getElementById('analyze-btn').addEventListener('click', async () => {
  setAnalyzing(true) // show loading state
  const [tab] = await chrome.tabs.query({ active: true, currentWindow: true })
  // analysis flow...
})
The extension uses Chrome’s messaging API to coordinate communication between components.
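To give a concrete picture of that coordination, here is a minimal sketch of the flow inside the click handler: the popup asks the content script for the page text, then hands it to the background worker. The 'extract' and 'analyze' action names are illustrative shorthand, not necessarily the exact ones used in the extension:
// popup.js - a sketch of the message flow (inside the async click handler above;
// the 'extract' and 'analyze' action names are illustrative)
const { text } = await chrome.tabs.sendMessage(tab.id, { action: 'extract' })
const result = await chrome.runtime.sendMessage({ action: 'analyze', text })
if (result.success) displayResults(result.data)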
3. Extracting Article Content
For content extraction, I implemented a content script that isolates the main article content:
function cleanTextContent(element) {
  const clone = element.cloneNode(true)
  // Remove unwanted elements
  const selectorsToRemove = [
    'script',
    'style',
    'noscript',
    'iframe',
    'nav',
    'header',
    'footer',
    '.comments',
    '.sidebar',
    '.advertisement',
    '.social-share',
    '.related-articles'
  ]
  selectorsToRemove.forEach((selector) => {
    clone.querySelectorAll(selector).forEach((el) => el.remove())
  })
  const text = clone.textContent || ''
  // Split into paragraphs first - collapsing whitespace before the split
  // would destroy the blank lines the paragraph split relies on
  const paragraphs = text.split(/(?:\r?\n|\r){2,}/)
  return paragraphs
    .map((p) => p.replace(/\s+/g, ' ').trim())
    .filter((p) => p.length > 0)
    .join('\n\n')
}
This function provided a reasonable extraction of article text across various website structures. Filtering out unwanted items not only prevents their content from skewing the analysis of the article but also reduces the number of input tokens sent to the LLM.
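For completeness, here is a sketch of how cleanTextContent might be invoked from the content script side; the 'extract' action name and the article/main fallback selector are my assumptions:
// content.js - responding to an extraction request (a sketch)
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  if (request.action === 'extract') {
    // Prefer a semantic article container, fall back to the whole body
    const root = document.querySelector('article, main') || document.body
    sendResponse({ text: cleanTextContent(root) })
  }
})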
4. Designing the Analysis Prompt
The so-called analysis engine leverages the Claude API to examine content across multiple dimensions.
I created a structured prompt that would produce consistent results:
const ANALYSIS_PROMPT = `
Please analyze this article for:
1. Summary: Provide a brief summary in Q&A format
2. Emotional language: Identify emotionally charged language
3. Subjectivity: Identify opinionated statements
// Additional analysis categories...
Format your response as a JSON object with the following structure:
{
"summary": [{ "question": "...", "answer": "..." }],
"linguisticAnalysis": { ... }
}
`
5. Implementing the Claude API Call
The background script handles communication with Claude:
async function analyzeWithClaude(text) {
  const response = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': ANTHROPIC_API_KEY,
      'anthropic-version': '2023-06-01'
    },
    body: JSON.stringify({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 4096, // required by the Messages API
      messages: [{ role: 'user', content: ANALYSIS_PROMPT + text }]
    })
  })
  const data = await response.json()
  return JSON.parse(data.content[0].text)
}
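Two details worth guarding against here: the request itself can fail, and models occasionally wrap their JSON in markdown fences. A more defensive version of the response handling might look like this - my addition, not part of the original code:
// background.js - a defensive variant of the response handling (a sketch)
async function parseClaudeResponse(response) {
  if (!response.ok) {
    throw new Error(`Claude API returned ${response.status}`)
  }
  const data = await response.json()
  // Strip markdown fences in case the model wraps its JSON output
  const raw = data.content[0].text.replace(/^```(?:json)?\s*|\s*```$/g, '')
  return JSON.parse(raw)
}
analyzeWithClaude could then simply end with return parseClaudeResponse(response).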
6. Creating the Results Display
For the main analysis results, I used innerHTML for efficiency:
function displayResults(data) {
  document.getElementById('analysis-content').classList.remove('hidden')
  const keyQuestionsDiv = document.getElementById('key-questions')
  keyQuestionsDiv.innerHTML = data.summary
    .map(
      (item) => `
        <div class="item">
          <h3 class="item-title">${item.question}</h3>
          <p class="item-desc">${item.answer}</p>
        </div>
      `
    )
    .join('')
  // other sections...
}
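One caveat with the innerHTML approach: the interpolated strings come back from an external model, so stray markup could end up injected into the popup. A small escaping helper is a cheap safeguard - a sketch of my own, not something the extension necessarily does:
// A minimal HTML-escaping helper (hypothetical addition)
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
}
// Usage in the template: ${escapeHtml(item.question)} instead of ${item.question}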
7. Implementing Data Persistence
I added storage functionality to save analysis results:
// Save analysis results
await chrome.storage.local.set({ lastAnalysisResult: data })
// Retrieve saved results (get() resolves to an object keyed by name)
const { lastAnalysisResult } = await chrome.storage.local.get('lastAnalysisResult')
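With the result persisted, the popup can restore the previous analysis when it reopens. A sketch of how that wiring might look - the DOMContentLoaded hook is my assumption:
// popup.js - restore the last analysis when the popup opens (a sketch)
document.addEventListener('DOMContentLoaded', async () => {
  const { lastAnalysisResult } = await chrome.storage.local.get('lastAnalysisResult')
  if (lastAnalysisResult) {
    displayResults(lastAnalysisResult)
  }
})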
8. Integrating Brave Search API
To streamline comparison with other sources, I integrated the Brave Search API:
const BRAVE_URL = 'https://api.search.brave.com/res/v1/web/search'
async function searchWeb() {
  try {
    const searchQuery = await chrome.storage.local.get('lastAnalysisResult')
    const lang = searchQuery.lastAnalysisResult.language.result
    const query = searchQuery.lastAnalysisResult.search.result
    const response = await fetch(
      `${BRAVE_URL}?q=${encodeURIComponent(query)}&search_lang=${lang}`,
      {
        method: 'GET',
        headers: {
          'Content-Type': 'application/json',
          'X-Subscription-Token': BRAVE_API_KEY,
          'Accept-Encoding': 'gzip'
        }
      }
    )
    const json = await response.json()
    const searchResults = json.web.results
    const sources = []
    const articleResults = searchResults.filter((result) => result.subtype === 'article')
    articleResults.forEach((result) => {
      sources.push({
        title: result.title,
        url: result.url,
        source: result.profile.name
      })
    })
    return sources
  } catch (error) {
    console.error('Custom web search error:', error)
    throw error
  }
}
// Handle search request
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  if (request.action === 'search') {
    searchWeb()
      .then((results) => sendResponse({ success: true, data: results }))
      .catch((error) => sendResponse({ success: false, error: error.message }))
    return true // keep the message channel open for the async response
  }
})
This integration allowed the extension to find relevant articles on the same topic, creating a foundation for the comparison feature.
9. Displaying Search Results
For the search feature, I deliberately used DOM manipulation instead of innerHTML:
function displaySearchResults(data) {
  const container = document.getElementById('source-comparison')
  container.textContent = ''
  data.forEach((source) => {
    const itemDiv = document.createElement('div')
    itemDiv.className = 'item'
    const flexDiv = document.createElement('div')
    flexDiv.className = 'flex-between'
    const contentDiv = document.createElement('div')
    const title = document.createElement('h3')
    title.className = 'item-title'
    title.textContent = source.source
    // Create clickable link element
    const linkElement = document.createElement('a')
    linkElement.href = source.url
    linkElement.target = '_blank'
    linkElement.textContent = 'View Source'
    linkElement.className = 'source-link'
    // Add event listener for click
    linkElement.addEventListener('click', (e) => {
      e.preventDefault()
      chrome.windows.create({ url: source.url })
    })
    // Append all elements
    contentDiv.appendChild(title)
    contentDiv.appendChild(linkElement)
    // other elements...
    container.appendChild(itemDiv)
  })
}
This approach was necessary because the comparison results needed to include clickable links that open in new windows. Chrome’s extension pages run under a strict Content Security Policy that blocks inline event handlers, so a handler embedded in an innerHTML string (e.g. an onclick attribute) would never fire. Building the elements with DOM APIs allows click handlers to be attached programmatically, where they can call extension functions like chrome.windows.create().
10. Adding Localization Support
For localization, I created the standard Chrome extension localization structure:
/_locales
  /en
    messages.json
  /pl
    messages.json
In messages.json files, I defined translations:
// /_locales/en/messages.json
{
  "analyzeArticle": {
    "message": "Analyze Article"
  },
  "summary": {
    "message": "Summary"
  }
  // Other translations...
}
Then implemented the translation function:
function translateUI() {
  const elements = document.querySelectorAll('[data-i18n]')
  elements.forEach((element) => {
    const key = element.getAttribute('data-i18n')
    const translation = chrome.i18n.getMessage(key)
    if (translation) {
      element.textContent = translation
    }
  })
}
All that was left was to add data-i18n attributes to the HTML:
<span data-i18n="analyzeArticle">Analyze Article</span>
and configure localization with the default language in manifest.json:
"default_locale": "en"
The development process was straightforward and methodical. Each component performed a specific function that contributed to the overall user experience. The project’s scope was deliberately limited to focus on providing useful media analysis with minimal friction, making it a compact but functional tool for critical reading.
Beyond Technology: The Value of Critical Perspective
While Obiektyw is fundamentally a wrapper around a language model, its value lies in accessibility and immediacy. By removing the friction of normally required actions such as copying, pasting, and prompt structuring, it makes critical analysis available with a single click. This convenience encourages more frequent evaluation of content, helping users develop what I call “cognitive antibodies” against manipulation.
The ability to critically evaluate information has become as essential as literacy itself. Without this skill, we risk being influenced by those who understand the psychology of persuasion better than we understand it ourselves.
It’s important to note that the goal isn’t to advocate for purely factual content without perspective. Content that challenges our viewpoints and introduces new ways of thinking is essential for intellectual growth. Reading a summary of a philosophical work will never have the same impact as engaging with the original text and wrestling with its ideas.
Rather, Obiektyw helps identify when influence crosses into manipulation - when rhetorical techniques are employed not to illuminate but to obscure. It does so by highlighting patterns like:
- Emotionally charged language that bypasses rational thought
- Logical fallacies that create illusions of valid arguments
- Framing techniques that present partial truths while omitting crucial context
- Polarizing rhetoric that artificially divides complex issues into binary oppositions
The extension helps restore agency to readers in how they process information.
Looking Forward
This project only scratches the surface of a very deep and complex topic. Its purpose is to be a conversation starter rather than a complete solution.
As manipulation techniques evolve, so too must our tools for detecting them. The possibilities for improvement in future iterations are endless, for example:
- Automated source comparison with the help of an LLM
- Ranking of web portals that frequently use manipulation techniques
- Collaborative databases of known manipulation patterns
- Educational components that explain why certain content raised concerns
- Machine learning models trained specifically on manipulation detection
And probably many more that I haven’t thought of yet.
What I can think of is that in an information environment where attention is the primary currency, developing technical safeguards for cognitive autonomy becomes increasingly important.
The challenge of navigating today’s information landscape won’t be solved through censorship or platform moderation alone. Instead, it requires equipping individuals with the tools and skills to evaluate content with greater insightfulness and confidence.