Tagged: Political discourse at Meta
-
Political Discourse by Software Engineer at Meta
Posted by Danny Vesokie | Affiliated Financial Partners on October 18, 2024 at 1:22 am

“Usually the disinformation we saw was pro-Trump… We’ll investigate, and then it goes just up to another team to take it [the post] down,” admits Matthew Fowler, a Software Engineering Manager at Meta, during an undercover date with an O’Keefe Media Group journalist. Fowler emphasized that Meta’s investigations often rely on mainstream media for verification, stating, “The news is going to do their job, and then based on what the outlets say… you have to trust that.”
Plamen Dzhelepov, a Machine Learning Engineer at Meta, agreed, “Meta has the right to suppress anything,” acknowledging the company’s political bias by noting, “they’re biased if they do that against the Republicans.” Dzhelepov detailed Meta’s censorship capabilities, confirming they target “crazy conspiracy right-wing people,” actively demoting their opinions.
Michael Zoorob, a Data Scientist at Meta, described the fact-checking process, stating, “the fundamental problem is that… the vast majority of what fact-checkers label as false is shared by conservatives,” admitting that any policy demoting content labeled false by fact-checkers disproportionately affects conservatives.
When addressing misinformation, Dzhelepov explained Meta’s approach: “it’s either censored or it says a [community note] thing, where it says, ‘this is disinformation, it’s probably fake’ under the post,” adding that Meta “actually suppresses certain voices.” Zoorob also recalled how Meta had downranked the Hunter Biden laptop story upon the FBI’s warning, stating, “We just downranked it,” making the post less visible in users’ feeds.
Susan replied 1 month ago · 3 Members · 2 Replies
2 Replies
-
In the context of professional work in software engineering at Meta, the topic touches on several important issues at the intersection of technology, the social media industry, and politics.
To make this clearer, these are the most common politically charged topics a software engineer at Meta is likely to encounter or explore in the scope of their work:
The Role of Social Media in Today’s Politics
Increased Reach to Audiences: People sharing political opinions on Meta’s Facebook, Instagram, or WhatsApp can reach communities of thousands who discuss and amplify their views. The same tools also increase the likelihood that misleading information, or even extreme controversy, spreads.
Political Fragmentation: A software engineer might examine how ranking algorithms, especially those tuned for engagement optimization, can intensify political polarization. For example, after a user subscribes to (or “likes”) certain pages or posts, the platform recommends similar posts that reinforce the same ideology.
Content Moderation vs. Free Speech: Balancing freedom of expression against overreach in content control is one of the greatest challenges Meta faces. Meta’s engineers design systems and frameworks that remove hateful, graphic, and violent content from the platform without encroaching on free political speech.
Algorithmic Bias
Filter Bubbles: Algorithms that filter content based on what a user has liked or interacted with can create an insular environment in which people rarely encounter alternative views. Because users have less chance of seeing opposing political ideologies, this bias in content distribution can measurably alter the political landscape.
Ethical Algorithms: A possible solution is to give engineers the tools and frameworks to build fair, unbiased, and ethical algorithms from the start, because allowing social media companies to run their algorithms without oversight invites politically biased content and misinformation.
Artificial Intelligence and Political Content: Misinformation is rampant on social media, and AI-driven recommendation systems can amplify it. Detection models can be built, but they are constantly exposed to new sources and tactics, so machine learning engineers must continually adapt content-policing algorithms to different kinds of political speech without abusing free speech.
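The filter-bubble dynamic described above can be sketched as a toy recommender. This is an illustrative example only, not Meta’s actual ranking system: it scores candidate posts by topic overlap with a user’s past engagement, so the feed keeps surfacing more of what the user already agrees with. All names and data here are hypothetical.

```python
from collections import Counter

def recommend(history_topics, candidates, k=2):
    """Toy engagement-based ranking: score each candidate post by how
    many topics it shares with the user's past interactions.
    `candidates` is a list of (post_id, topics) pairs."""
    liked = Counter(history_topics)
    scored = [
        (sum(liked[t] for t in topics), post_id)
        for post_id, topics in candidates
    ]
    # Highest-overlap posts rank first, so the feed keeps echoing
    # the ideologies the user already engaged with.
    scored.sort(reverse=True)
    return [post_id for _, post_id in scored[:k]]

history = ["tax-policy", "tax-policy", "border-security"]
posts = [
    ("p1", ["tax-policy", "border-security"]),  # same ideology: score 3
    ("p2", ["climate"]),                        # alternative view: score 0
    ("p3", ["tax-policy"]),                     # same ideology: score 2
]
print(recommend(history, posts))  # → ['p1', 'p3']
```

The alternative-view post never surfaces unless the user's history changes, which is exactly the isolationist effect the filter-bubble critique names.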
Misinformation and Fake News
Challenges in Detection: Engineers face the technical problem of developing systems capable of identifying and filtering fake news. With deepfakes and other advanced forms of fabricated media gaining popularity, separating truth from noise at a system level becomes harder still.
Combatting Misinformation: Mitigation combines a range of activities, such as fact-checking partnerships with media organizations, improvements to AI detection, and user flagging of false information. It remains an uphill task, though, because there is a constant arms race between those who generate disinformation and those who build countermeasures.
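The multi-signal approach above (fact-check partnerships, AI detection, user flagging) might be combined along these lines. The thresholds and signal names are invented for illustration; real moderation pipelines are far more complex.

```python
def should_flag(factcheck_label, model_score, user_reports,
                model_threshold=0.9, report_threshold=50):
    """Hypothetical aggregation of three signal sources: a partner
    fact-check verdict, an AI misinformation score in [0, 1], and the
    volume of user flags. All thresholds are illustrative assumptions."""
    if factcheck_label == "false":
        return True  # a partner fact-check verdict wins outright
    if model_score >= model_threshold:
        return True  # high-confidence AI detection
    # Fall back to the crowd signal: enough user reports trigger review.
    return user_reports >= report_threshold

print(should_flag("unrated", 0.95, 3))   # → True (AI signal)
print(should_flag("unrated", 0.40, 10))  # → False
```

Ordering the checks this way treats the human fact-check as authoritative and the automated and crowd signals as progressively weaker evidence, one plausible design among many.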
Data Privacy and Political Campaigning
Use of User Data in Politics: Past elections saw scandals over how campaigns used user data for targeting, most notably the Cambridge Analytica scandal, and the work of Meta’s engineers is what made such targeting possible. These concerns should push Meta’s engineers to find ways to ensure the ethical use of user data without disregarding users’ privacy.
Transparency in Political Ads: Meta has implemented some political-ad transparency features in response to these events. Engineers, however, still find it difficult to design systems that track and report political ads’ expenditures and targeting, which is essential to maintaining trust.
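A minimal sketch of the kind of roll-up an ad-transparency system performs: aggregating raw ad-delivery events into per-advertiser spend and targeting summaries. The event schema and names are hypothetical, not any real ad-archive API.

```python
from collections import defaultdict

def spend_report(ad_events):
    """Roll up raw ad-delivery events into a per-advertiser report of
    total spend and the audience segments targeted, the kind of summary
    a public ad-transparency archive could expose.
    Each event is a (advertiser, spend_usd, audience_segment) tuple."""
    report = defaultdict(lambda: {"total_spend": 0.0, "segments": set()})
    for advertiser, spend, segment in ad_events:
        report[advertiser]["total_spend"] += spend
        report[advertiser]["segments"].add(segment)
    return dict(report)

events = [
    ("CampaignA", 120.0, "ages 18-29, Ohio"),
    ("CampaignA", 80.0, "ages 30-49, Ohio"),
    ("CampaignB", 200.0, "rural voters"),
]
print(spend_report(events)["CampaignA"]["total_spend"])  # → 200.0
```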
Global Impact and Regulation
Meta Legal Policies and Local Laws: Meta operates globally, so content that is lawful in one country may violate the laws of another, and users and media on the platform must respect the jurisdiction hosting them. Engineers must therefore design systems that accommodate different content-moderation regimes and political situations in different regions while keeping the platform consistent.
Engineers must also ensure that legal requirements concerning political content and advertising, such as Europe’s GDPR or evolving US law, are reflected in Meta’s systems.
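Region-dependent moderation can be modeled, in a highly simplified form, as a per-jurisdiction rule lookup. The rule table below is illustrative only (for instance, Holocaust denial is illegal in Germany but constitutionally protected speech in the US); it is not Meta’s actual policy data.

```python
# Hypothetical per-region rule table: the same post may be legal in one
# jurisdiction and restricted in another, so moderation must consult
# local law before acting.
REGION_RULES = {
    "EU": {"political_ads_require_label": "label", "holocaust_denial": "remove"},
    "US": {"political_ads_require_label": "label", "holocaust_denial": "allow"},
    "DE": {"political_ads_require_label": "label", "holocaust_denial": "remove"},
}

def moderation_action(region, content_type):
    """Return the action for a content type in a region, defaulting to
    'allow' when no rule is defined for that jurisdiction."""
    rules = REGION_RULES.get(region, {})
    return rules.get(content_type, "allow")

print(moderation_action("DE", "holocaust_denial"))  # → remove
print(moderation_action("US", "holocaust_denial"))  # → allow
```

Defaulting to "allow" for unknown regions is itself a policy choice; a production system would need an explicit decision for every jurisdiction it serves.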
Ethics of Designing Political Influence Tools
Impact on Elections: Meta’s platforms have been exploited by both legitimate and malicious political campaigns during elections. Some engineers reflect on the ethics of designing interfaces that can change people’s opinions or alter election outcomes; others remain indifferent.
Defending Democratic Values: The engineer’s responsibilities may also include ensuring that the platform is not abused by foreign states for influence operations or flooded with fake news during elections.
Employee Activism and Brand Accountability
Internal Debates: Software engineers increasingly believe their employer, Meta, should be accountable for its role in the politics of the countries where it operates. There have been internal disputes over how the platforms are moderated, which political ads are allowed, and how Meta treats disinformation.
Corporate Goals and Social Impact: Engineers may face ethical dilemmas in balancing Meta’s business objectives against the consequences for society. How far does the company’s (and its employees’) responsibility extend over political outcomes tied to its platform?
A software engineer at Meta, whether or not they work directly on products tied to political discourse, belongs to an organization that hugely shapes political interaction across the globe. Such a role must balance the drive to innovate and grow the platform against the consequences of the products they build in today’s politically charged social media climate.
-
The interactions described above raise issues concerning Meta’s (formerly Facebook’s) content moderation policies, political bias, and transparency. The statements made by Meta employees in this undercover interaction suggest that some degree of political bias may shape how disinformation and fact-checking are handled on the platform. Let us parse the key concerns raised:
Political Bias in Content Moderation
The assertions by Matthew Fowler and Plamen Dzhelepov provide a basis for the belief that Meta’s content moderation has political implications. Dzhelepov acknowledged that Meta intentionally targets “crazy conspiracy right-wing people” and suppresses their opinions. This is in line with wider anecdotes suggesting that censorship on major social networks falls disproportionately on conservative content.
Michael Zoorob’s remark that content labeled false by fact-checkers is far more often conservative points to an imbalance in how misinformation is handled: any demotion policy built on those labels indirectly affects conservative voices more. That imbalance raises the question of whether the fact-checking process itself is free of political bias.
Trusting the Mainstream Media as a Source of Factual Information
One concern about credibility and independence is Fowler’s assertion that Meta’s disinformation investigations depend heavily on mainstream media. Mainstream outlets can themselves be biased; if Meta’s content moderation decisions hinge on what those outlets report, the same biases will be reflected in Meta’s enforcement.
Relying on mainstream media to verify disinformation also makes one question whether platforms such as Meta should be more cautious about endorsing the conclusions of external outlets without a thorough investigation of their own.
Downranking and Suppressing Content
Dzhelepov and Zoorob described Meta’s capabilities for downgrading content: making posts less visible in users’ feeds and attaching warnings or disclaimers. This was distinctly seen in the Hunter Biden laptop story, which Meta downranked after a warning from the FBI.
Though Meta might frame this moderation as stopping the flow of disinformation, critics argue that downranking or suppressing certain narratives, especially politically charged stories, compromises public debate and transparency, particularly when the veracity of the information is still ambiguous.
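The downranking mechanism Zoorob describes (“We just downranked it”) can be illustrated with a toy feed ranker: a flagged post is not removed, its engagement score is simply multiplied down so it surfaces lower. The demotion factor and data are arbitrary assumptions, not Meta’s real weights.

```python
def rank_feed(posts):
    """Toy feed ranking in which a 'downranked' post stays on the
    platform but has its engagement score scaled down, so it appears
    lower in the feed rather than disappearing outright.
    Each post is a (post_id, engagement_score, downranked) tuple."""
    DEMOTION_FACTOR = 0.1  # illustrative assumption; real weights unknown

    def effective_score(post):
        _, score, downranked = post
        return score * DEMOTION_FACTOR if downranked else score

    return [p[0] for p in sorted(posts, key=effective_score, reverse=True)]

feed = [
    ("laptop-story", 900.0, True),   # highest raw engagement, but flagged
    ("sports-post", 300.0, False),
    ("recipe-post", 120.0, False),
]
print(rank_feed(feed))  # → ['sports-post', 'recipe-post', 'laptop-story']
```

The flagged post had the most engagement yet lands last, which is why downranking is hard for users to detect: nothing is visibly removed.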
Free Speech and the Democratic Idea of a Marketplace of Ideas
The remarks indicate that Meta has practical mechanisms to censor or limit content at any time. Dzhelepov’s comment that “Meta has the right to suppress anything” illustrates social media companies’ grip on what the general public can access. Such censorship can deeply damage freedom of expression and the validity of open discourse, especially where it proceeds from a predisposed ideological perspective.
Zoorob pointed out that material designated false by fact-checking agencies is overwhelmingly shared by conservatives, so demoting such content falls hardest on them. This is a root cause of the debate over whether social media is biased against certain political affiliations.
Trust and Responsibility in Content Moderation
The reliance on outside agencies such as the FBI for decisions about which content to take down raises concerns about who is ultimately in charge of content management. Will such choices and policy decisions be made openly enough for citizens to hold the responsible authorities accountable?
Moreover, it raises the issue of uniformity in employing content moderation policies. In instances where only specific political issues or opinions are downranked or suppressed, this may damage Meta’s ethical standing as a platform for objectivity.
These findings raise questions about how transparent Meta is when moderating political conversations, particularly conservative speech. It is alarming how much information may be de-emphasized at the prompting of external media and government agencies, and how much power Big Tech holds in directing political discourse. Issues of free speech, political bias, and platform responsibility are only likely to intensify as Meta and other platforms remain entangled in content moderation.