Author Archives: Maira Rehman

Class Discussion: Platform Infrastructures

This week’s readings cut through the technical jargon surrounding “software platforms,” describing them as intermediary, second-order infrastructures built upon physical infrastructures that usually take between thirty and one hundred years to fully develop. According to Edwards, such platforms (for example, Facebook and YouTube) spread across the globe like wildfire thanks to strategic architectural features that allow a platform’s core components to interface with new and innovative complementary components as needed. A defining feature of these platforms is that they position themselves as neutral spaces for user activity: “what we might call ‘platform discourse’ is to render the platform itself as a stable, unremarkable, unnoticed object, a kind of empty stage” (Edwards 319). Kurgan et al., however, contend in “Homophily,” using Facebook’s example, that the way these platforms operate is far from neutral. Social networks like Facebook group people with similar interests, showing them friend suggestions and content based only on those interests, thereby creating a feedback loop that shuts out diverse viewpoints and knowledge. On a large scale and over time, such algorithmic practices lead to polarization, giving these software platforms a very real impact on societal and cultural perceptions. Platforms show users only the content they like, whatever the adverse impact on society at large, because user activity yields raw material in the form of data. According to Srnicek, capitalism in the twenty-first century centers on the extraction and use of data for advertising, worker optimization, and the like. Platforms are so important now because they facilitate the collection of this data.

Question One: In Platform Capitalism, Srnicek notes that advertising platforms sell users’ online activities to adeptly matched advertisers so that those advertisers can sell consumers more products. Has there ever been a case where the data from these software platforms was used to create products that match users’ needs rather than simply to sell them existing products? Would data extraction be a less exploitative process if that information were used to create products based on trends indexing what consumers actually want?

Question Two: In The Age of Surveillance Capitalism, Zuboff contends that surveillance capitalism was invented in the US, though its consequences now belong to the world. Considering her “no exit” metaphor, would Gen Z outside the US, in less developed countries like Kenya and Ghana, have more of a backstage in which to nurture the self, since technological limitations and internet issues mean these platforms would not be used there as readily as in the US?

Question Three: Srnicek points out that companies like Amazon and Microsoft now own the software rather than the physical product, allowing them to employ a subscription model for their users. Considering that these subscription services necessitate continuous internet access, would people in developing countries be at a disadvantage because of such a requirement? Could not owning a product for indefinite use—as opposed to the subscription model—hinder people from less technologically advanced regions from making use of it?

Blog Post – Tracing a Knowledge Infrastructure

Academic journals are supposed to aid the dissemination of knowledge through the publication of high-quality research. Such scholarship, however, often sits behind paywalls: the Modern Language Quarterly, PMLA, and the South Atlantic Quarterly, for example, all require subscriptions or university access to read the texts hosted on their platforms. According to the Duke library website, the average cost of one highly cited article for “an unaffiliated researcher is $33.41” (“Library 101 Toolkit”). This makes research an expensive endeavor, especially for budding academics not associated with a university library. This is where open-access journals come in. Open-access journals allow free and immediate use of academic books, articles, and other texts without access fees, “combined with the rights to use these outputs fully in the digital environment” (“Springer Nature”).

The Journal of Cultural Analytics is one such online open-access journal, dedicated to promoting scholarship that applies “computational and quantitative methods to the study of cultural objects (sound, image, text), cultural processes (reading, listening, searching, sorting, hierarchizing) and cultural agents (artists, editors, producers, composers)”. It published its first issue in 2016, featuring three sections: articles offering peer-reviewed scholarship, data sets presenting and discussing new data related to cultural studies, and debates on key interventions in the computational study of culture. This open-access journal aims to “serve as the foundational publishing venue of a major new intellectual movement” and to challenge disciplinary boundaries (“Journal of Cultural Analytics”). It allows authors to retain the copyright of their published material and grants itself the right of first publication of their work under the Creative Commons Attribution 4.0 International License (CC BY). Due to its open-access model, authors are not paid for publishing with it. One of its very first articles, “There Will Be Numbers” by Andrew Piper, has to date garnered 2,852 views and 602 PDF downloads (“Journal of Cultural Analytics”).

This journal is published by McGill University’s Department of Languages, Literatures, and Cultures. Its editor is Andrew Piper, a professor in that department and director of the Cultural Analytics lab. The rest of its editorial board is made up of digital humanities professors from across different North American universities such as UT Austin, Cornell, and CUNY. Despite its affiliation with McGill, this journal is hardly ever mentioned in the university’s halls, nor are students made aware of its existence. The most I heard about it was in my “Introduction to Digital Humanities” seminar, when one of my class readings came from this journal. There seems, then, to be almost no effort on McGill’s part to promote this journal to students. Delving deeper into this issue, the problem seems to be that despite its interdisciplinary nature the journal is viewed as catering to a niche audience, i.e., digital humanists rather than all humanities scholars. That it is published by a single humanities department at McGill, rather than in collaboration with all such programs, might also explain why it is less known and promoted. As an open-access journal, especially one in a computational humanities field, it should be better advertised to students, at least at the university that publishes it, so that they may better avail themselves of its resources.

Blog Post – Infrastructure, Technology, Ecology

Jackson, Mukherjee, Lally et al., and Ensmenger all emphasize the embeddedness of information technology systems in their respective texts. Whether by calling attention to how progress and innovation are layered upon invisible processes of repair or by highlighting the physical and environmental costs of housing Bitcoin miners, these texts underscore the material realities of contemporary technological advances. As these readings render visible the physical infrastructures undergirding today’s digital business ventures, they situate these digital advancements in the larger environmental and political landscape of the world—far from idealized imaginings of the Cloud as essentially free and easily available. Doing so shifts focus to the material costs of the digital age. Not only is the planet heating up faster (if the Cloud were a country, it would be the sixth largest electricity consumer on Earth), but the computers and devices required to make use of the Cloud depend on mining rare minerals in politically contested regions. It seems impossible, then, to separate the internet, and any businesses that depend on it, from the environments in which they are situated. Interrogating these advancements in terms of their material effects is therefore necessary for any scholarship that engages with these technologies.

Question One: In “Rethinking Repair,” Steven J. Jackson gives us the term “broken world thinking” to interrogate the problems facing scholarship on new media technologies. Given the accelerated pace of current technological advancements, are we as scholars in need of new terms to interrogate and explain the shifting digital landscape? Can we realistically keep up with the ethical, environmental, and political questions brought about by these ever-changing technologies?

Question Two: The exploitative practices of cryptocurrency miners, described as “infrastructural parasites” by Lally et al., are at odds with how the public views them. The environmental and infrastructural costs needed to support such practices are absent from cultural understandings of this profession. What does the narrative surrounding the internet in a broader sense, and cryptocurrency in a more specific one, tell us about why the material costs of these practices remain largely invisible?

Question Three: According to Ensmenger in “The Cloud is a Factory,” industrialization and subsequent advances in technology meant that “new machines did not replace human workers; they created new forms of work that required (or at least enabled) the mobilization of new types and categories of labor. Whether it was the new machines that drove the search for new labor or the availability of new labor that encouraged the development of new machines is not relevant. The elements of the new industrial order were dependent on one another. That is what industrialization meant: the recombination of new machines, new organizational forms, and new forms of labor” (39). How might this reasoning help us understand AI now? Can we view advances in this form of technology as being driven by the forces of a postindustrial society?

Personal Narrative

Google Books is how I search for new academic texts and titles. While there are more targeted, specialized knowledge repositories for academic audiences like JSTOR and ProQuest, for me Google Books’ advantage lies in the fact that it holds approximately 40 million titles in more than 500 languages, comprising both fictional and academic texts. It is an ambitious project by the Google team, albeit one at times steeped in legal controversies and copyright issues.

Finding texts on Google Books is easy. The “Any document” and “Any time” features let users select what kind of text they are looking for—“Books, Magazines, Newspapers”—and the time frame in which that text would have been published. For older users, who are at times resistant to change, Google Books offers the option of using its old web interface instead of the newer, updated version. Moreover, as an aid to academics, the service enables citations to be exported.

The true promise of this platform is that anyone, almost anywhere, can access this huge library of texts (with the acknowledgement, of course, that full access to all these texts is not possible). The implicit understanding is that Google is preserving these texts and reducing the physical space bound books take up by storing them in the ether, up in the cloud, or wherever one imagines the internet to exist. This belief, however, overlooks the data centers and the physical space they occupy so that we may free up space in our libraries or on our bookshelves at home as we make use of platforms like Google Books. It ignores the maintenance costs of running these data centers and the invisible infrastructure that supports the knowledge queries of the digital age.

Google Books’ potential to democratize knowledge is real; the access this service provides is unprecedented. What is also real, though, is how easily this access can be disrupted through both physical and political means. Internet connectivity issues have occurred when sharks damaged underwater fiber-optic cables. Political changes in the global landscape have restricted access too, as with Russia’s increased internet censorship during the war on Ukraine. What these examples highlight is that internet access—and by extension access to knowledge repositories like Google Books—is tenuous. Access can be revoked at any time, outside of a user’s control. Thus, while knowledge is arguably within easier reach of most people, their ownership over it has diminished. We can look up titles on Google Books and read texts, but we cannot always be sure that access will not be revoked for physical or political reasons. None of this, however, is meant as a condemnation of the digital age or the tools that exist because of it. I only wish to check our expectations and interrogate the promise of online platforms like Google Books. The systems needed to operate these platforms are vast, and it is time we as users became more aware of what we take part in when using these online knowledge repositories.