Infrastructure Manifest (v. 1.0) - Susan Brown - Chapter 2 - “Replatforming”
Column headings for each infrastructure entry below:

  • Infrastructure Type
  • Description^a
  • Control point for access to or use^b (for example, identify the owner, regulator, manager, etc. as appropriate)
  • Open Source, Open Access, or Public Resource/Utility^c
  • Proprietary^c
  • How heavily used?^d
  • Reliability^d
  • Satisfaction for user^d
  • Ethical concerns (known or possible)
  • Comments

Land infrastructure

For example, properties ceded, unceded, public, private, etc.

Include only principal land infrastructures.

The land on which I live and work is located on the ancestral lands of the Attawandaron people (now dispersed), the Anishinaabe and Haudenosaunee peoples, and the treaty and territory of the Michi Saagiig Nishnaabeg (Mississaugas of the Credit) First Nation. This land falls under the Dish with One Spoon Wampum, a covenant between the Anishinaabe and the Haudenosaunee to live peaceably and share resources while recognizing sovereignty.

Servers located on the traditional lands of the WSÁNEĆ (Saanich), Lkwungen (Songhees), and Wyomilth (Esquimalt) peoples run the CWRC (including Orlando) and LINCS websites. Servers through which communications and files are routed cross many other Indigenous lands.

  • Ontario Land Titles Act, 1885, and other legislation related to land use.
  • Author, University of Guelph, Simon Fraser University.
Open source/open access/public: Not applicable. Proprietary: Yes. How heavily used: 10. Reliability: 10. Satisfaction for user: 10.

This Indigenous land, whether unceded or subject to treaty (there are both cases), is governed under Canadian law by understandings of private property alien to those from whom it was appropriated. I am listing these lands as proprietary since the land on which my home sits, University of Guelph lands, and the lands that sustain my daily life by growing food or supporting commerce are all treated as private property under Canadian law.

Materials infrastructure

For example, notable common or rare extracted or other materials used in research, production, and communication.

Break down by notable materials used in devices or tools if appropriate.

  • Aluminum, tin, gold, and rare earth materials embedded in computing equipment.
  • Ink, toner, paper.
  • Apple and other electronics manufacturers.
  • Commercial distributors.
Open source/open access/public: Not applicable. Proprietary: Yes. How heavily used: 10. Reliability: 10. Satisfaction for user: 10.

Exploitative and polluting mining practices that flow from colonialism and extractive capitalism are a major ethical problem. Planned obsolescence drives up consumption unnecessarily. I recycle electronics through the university but am skeptical that the minerals are being safely recycled, if at all. Apple claims to be sourcing an increasing proportion of its rare earth metals from recycled materials, but recycling processes are themselves energy-intensive and use toxic chemicals.

Ink and toner are non-renewable and non-sustainable in their components; partly recycled paper is itself recyclable but does little to mitigate climate change.

Energy infrastructure

For example, principal energy sources and infrastructure used in research, production, and communication

  • Public utilities: Electricity, heating, and cooling for buildings mentioned under Architectural Infrastructure (Ontario). Cooling for the servers mentioned under High Performance Computing (mostly British Columbia).
  • Proprietary: Gasoline for cars, diesel for trains, and jet fuel for airplanes.
  • Ontario Hydro
  • BC Hydro
  • Union Gas
  • Esso (ExxonMobil)
Open source/open access/public: Yes. Proprietary: Yes. How heavily used: 10. Reliability: 9. Satisfaction for user: 9.

As of 2022, the Canadian government reported Ontario electricity as generated 6% from natural gas; 9% from wind, tidal, and solar; 26% from hydroelectricity; and 59% from nuclear sources, claiming these as both green and clean. Because of government decisions, the percentage of renewable energy behind the electrical grid has dropped since 2021 from a high of 94% (The Pointer). BC Hydro's 2024 reports claim that 98% of its power is generated from clean, renewable sources (91% hydroelectric), with only 2% from natural gas for backup generation at peak periods and none from nuclear.

Transportation infrastructure

For example, private or public transportation needed for daily or research trips

  • The Guelph, Ontario and other provincial government road transportation system.
  • Car travel: a 2015 Toyota Camry during earlier drafting, then a 2019 Toyota RAV4 Hybrid during revisions.
  • Air, rail, bus, shuttle, and taxi travel by faculty and staff for meetings, conferences, research, and training.
  • Ministry of Transportation of Ontario.
  • Via Rail.
  • Air Canada and other airlines.
Open source/open access/public: Not applicable. Proprietary: Yes. How heavily used: 3. Reliability: 9. Satisfaction for user: 8.

I have classified these as proprietary because the inadequacy of publicly funded transportation infrastructure makes me reliant upon private vehicles to a regrettable extent.

Architectural infrastructure

For example, buildings, labs, rooms

  • My residence was my main workspace during the drafting of this chapter.
  • The Humanities Interdisciplinary Collaboration (THINC) Lab, a research space that houses meetings, faculty, staff, and students involved in work related to the CWRC and LINCS infrastructures. THINC Lab was established with a grant from the Canada Foundation for Innovation and is housed in the McLaughlin Library (1968) at the University of Guelph.
  • Other spaces in the McLaughlin Library, the McKinnon Building, the Maclachlan Building, the Boarding House Gallery, and the Arboretum Centre that have housed staff offices, retreats, workshops, and conferences related to the infrastructure discussed in the chapter. Faculty offices and meeting rooms.
  • Mortgage with Canadian Imperial Bank of Commerce.
  • University of Guelph Library, the College of Arts, and the Arboretum Centre.
Open source/open access/public: Not applicable. Proprietary: Yes. How heavily used: 10. Reliability: 10. Satisfaction for user: 10.

Although associated with a public institution, the lands and buildings of the University of Guelph are private property.

Civic, community, national, or regional infrastructure

For example, provided by cities, communities, governments, etc.

  • University of Guelph Act (Canada, 1964).
  • Ministry of Training, Colleges and Universities (Ontario, 1990).
  • Budget Implementation Act 1997, Bill C-93 (Canada) that established the Canada Foundation for Innovation (CFI).
  • The Social Sciences and Humanities Research Council [SSHRC] Act (R.S.C., 1985, c. S-12) (Canada, last amended 2012).
  • University of Guelph.
  • Government of Canada funding agencies.
Open source/open access/public: Yes. Proprietary: Not applicable. How heavily used: 10. Reliability: 8. Satisfaction for user: 6.

A tiny fraction of Canada’s dedicated infrastructure funding stream from the Canada Foundation for Innovation (which initially excluded HSS projects entirely) goes to humanities and social science projects.

CFI funding has been invaluable in supporting platform work. Apart from the overall shortage of funding, the biggest challenges relate to operations and sustainability: a problem across the board, but one exacerbated for HSS infrastructure by the inappropriateness of user fees and by lower overall funding from other sources relative to the natural and health sciences.

There are both benefits and disadvantages to the extent that infrastructure and research funding are insulated from each other in the current Canadian system. One of the greatest sources of concern at present is the inadequate funding for sustaining infrastructure once built.

Institutional infrastructure

For example, equipment, services, and “overhead” or “indirect cost recovery” items for grants to universities

Local (college/unit-level) and Research Office grants officers, administrators, and financial managers; Research Office policy and strategic administrators; Research Ethics Boards (run by staff and faculty); procurement services; and university legal counsel, all paid at least in part through the Research Support Fund, which covers some indirect costs of research to Canadian universities based on the amount of national research funding received.

  • Universities of Guelph and Alberta; other universities.

Open source/open access/public: Not applicable. Proprietary: Yes. How heavily used: 10. Reliability: 7. Satisfaction for user: 7.

Financial cuts to universities in Canada have led to understaffing and overwork in research offices. Working across institutions to produce infrastructure collaboratively is extremely laborious because there are few established means of doing so beyond those associated with grant funding structures, so agility is impossible and creative solutions are hard to achieve. This situation in Canada aligns in part with the increasing administrative burden on researchers generally in North America: principal investigators in a 2012 survey by the US Federal Demonstration Partnership reported spending approximately 42% of their time on administration, and an Ex Libris survey shows decreasing satisfaction with support from research offices and libraries for research-related administration.

Labor infrastructure

For example, type and amount of labor from collaborators, staff, research assistants; include any attributes that seem important, such as unrecognized, unpaid, low-paid, outsourced, or other attributes of labor

The three platforms discussed here with which I have been involved have benefitted from the time and labour of many people: advice, peer review, and grant evaluation at the application stage; the uploading, organization, creation, updating, and augmentation of data and metadata by scholars (tenured, untenured, precariously employed, or independent), postdoctoral fellows, graduate students, and both undergraduate and graduate student research and technical assistants; and the work of professional staff, often scholars themselves, engaged in tasks including project management, research software development, administration, maintenance, upgrading and enhancement, user experience design and evaluation, data organization, transformation, migration, and backup, as well as the essential work of communication, training, support, and writing endless grant proposals and reports. Many but by no means all of the individuals in these roles are listed in the credits pages for Orlando, CWRC, and LINCS.

This particular essay benefitted from 16 hours of research assistance from Amelia Flynn for quotation and source checking, invaluable peer feedback from Ariana Ciula, Ann Borda, and David Berry, rigorous editing from Alan Liu, and other editorial work by Alan, Urszula Pawlicka-Deger, and James Smithies.

The Ontario Employment Standards Act, 2000 (ESA); Federal Contractors Program; HR policies and collective agreements at various universities, primarily Guelph and Alberta.

Open source/open access/public: Not applicable. Proprietary: Not applicable. How heavily used: 10. Reliability: 8. Satisfaction for user: 7.

The deplorable precarity of almost all staff roles associated with research software infrastructure, even long-term ones, is a structural reflection of the liminal position of such infrastructure within the academic system in Canada and in many, though not all, other contexts.

The precarity of many new scholar positions and shortage of tenure-track appointments, coupled with the ineligibility of contingent faculty or professional staff to hold most grants, increase the challenges with respect to continuity and succession of scholarly infrastructure leadership, especially given the heavy workload involved.

The quantity of labour involved is inestimable. The number of people who have supported the Orlando Project, CWRC, and LINCS approaches a thousand. Their efforts amount to well over a hundred person-years of directly remunerated labour, plus much indirectly remunerated or volunteer labour by faculty and colleagues.

Reliability and satisfaction scores reflect the precarity and shortage of funding; they in no way reflect on the reliability or performance of the individuals involved, who are typically generous in the application of their many talents and dedicated in ways hardly justified by the uncertain or temporary nature of their positions and the lack of institutional recognition and support.

Research-content infrastructure

For example, physical libraries and archives, online research materials, shadow libraries, etc.

  • McLaughlin Library at the University of Guelph, the lending services of the Ontario Council of University Libraries (OCUL), and the Interlibrary Loan system.
  • Electronic library content licensing and shared infrastructural services provided by the Ontario Council of University Libraries and the Canadian Research Knowledge Network.
  • The Web.
  • University Libraries and IT systems.

Open source/open access/public: Yes. Proprietary: Yes. How heavily used: 10. Reliability: 9. Satisfaction for user: 8.

The various control points for access to content point to the contradictions of our historical moment with respect to information ownership and sharing.

The University of Guelph Library promotes and supports open access to knowledge in many contexts, yet the library’s site makes me reauthenticate far more often than either the student registration and records system or the financial system when I access any content through the library portal, open access and proprietary alike. Such control mechanisms work against the interlinking and interoperability of online resources.

Google allows me to find and access content without direct cost while surveilling me and harvesting my search data.

Tools infrastructure

For example, principal analog tools and digital tools, scripts, or protocols used for research, writing, communication, production (excluding high-performance computing, for which see below)

  • Devices: Apple iMac, then Mac mini with external monitor; HP LaserJet printer; and iPhone 10, then 15, used for composition and revision. Orlando, CWRC, and LINCS team members all have some configuration of personal computing equipment and smartphones that they use for their work.
  • W3C standards and protocols (a SPARQL sketch follows this section’s comments); CIDOC-CRM
  • Linux, macOS, and Windows operating systems
  • Word processing: Google Docs; Microsoft Word.
  • Cloud storage: Dropbox, Google Drive.
  • Communications: Gmail, Outlook, Slack, Zoom, Google Meet, Teams.
  • High-speed internet on campus, at my residence, while in transit and travelling.
  • Campus IT services at the University of Guelph.
  • The Canadian CANARIE high-speed research and education network.
  • The global Eduroam network access system.
  • Commercial ISPs: Rogers and Bell.
Open source/open access/public: Yes. Proprietary: Yes. How heavily used: 10. Reliability: 8. Satisfaction for user: 8.

The infrastructure development and operations work that is the substance of this essay demands real-time meetings that take up the bulk of my workweek and much time from team members. Whereas I had been doing most such meetings from campus, the pandemic produced such reliance on high-speed internet in my home that we ended up with two ISPs in an attempt to compensate for service outages from our main provider. However, the situation improved only slightly and is apparently insoluble given the aging infrastructure in my old section of Guelph.

Although I tend to think of services such as Eduroam as free and open, I recognize that access to them relies upon privileges accorded by my institution and its underlying economic relationship to other institutions and organizations within the knowledge ecosystem.
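To make the W3C standards and protocols listed above concrete, here is a minimal sketch of a query in SPARQL, one such W3C standard, issued from Python with the SPARQLWrapper library against the public Wikidata endpoint (Wikidata appears among the LINCS dependencies in the next section). The query and its selection criteria are illustrative assumptions, not an actual Orlando, CWRC, or LINCS query.

    # Minimal sketch: a SPARQL query (a W3C standard) issued from Python with
    # the SPARQLWrapper library against the public Wikidata endpoint. The
    # query is illustrative only, not an actual Orlando/CWRC/LINCS query.
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper(
        "https://query.wikidata.org/sparql",
        agent="infrastructure-manifest-example/0.1",  # polite user agent
    )
    endpoint.setQuery("""
    SELECT ?writer ?writerLabel WHERE {
      ?writer wdt:P31 wd:Q5 ;        # instance of: human
              wdt:P21 wd:Q6581072 ;  # sex or gender: female
              wdt:P106 wd:Q36180 .   # occupation: writer
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)

    results = endpoint.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["writer"]["value"], "-", binding["writerLabel"]["value"])

Queries of this kind, written in the Yasgui editor or run against triplestores such as Fuseki and Blazegraph (both listed in the next section), are the basic mode of access to linked data infrastructures.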

Networked Platforms infrastructure

For example, major networked or cloud platforms used for research, storage, analysis, sharing, communication, publication—Google Drive, Dropbox, AWS, etc. (excluding high-performance computing, for which see below)

For personal research and infrastructure team collaboration:
  • Communication: Slack; Outlook; Gmail.
  • Design: Figma; Adobe Creative Cloud.
  • File storage and sharing: Google Drive, Dropbox.
For long-term preservation:
  • Ontario Library Research Cloud (OLRC), which uses OpenStack Swift, Horizon, DuraCloud, and Archivematica, for CWRC via the University of Alberta (a sketch of the underlying object-storage deposit follows this section’s comments).
  • The same for LINCS via the University of Victoria plus Borealis, the national research data repository system of Canada.
Major open-source library/software dependencies:
  • Orlando and CWRC (https://github.com/cwrc/CWRC-Schema; https://gitlab.com/calincs/cwrc)
    • AWS (used by LEAF partners and for Orlando front-end publication by Cambridge University Press)
    • BaseX
    • DBpedia
    • Docker
    • Drupal
    • Express
    • Figma
    • Google Analytics
    • Google Sheets
    • Keycloak
    • Node.js
    • Oxygen
    • React
    • WordPress
  • CWRC/LEAF (https://gitlab.com/calincs/cwrc; https://github.com/cwrc/; https://github.com/LEAF-VRE; https://gitlab.com/calincs/cwrc/leaf)
    • Docusaurus
    • Drupal
    • Figma
    • GitHub
    • Google Docs
    • Google Sheets
    • Islandora
    • Isle
    • npm
    • OpenRefine
    • Oxygen
    • TinyMCE
    • Trello
    • VSCode
    • Voyant
  • LINCS (https://gitlab.com/calincs)
    • 3M
    • Adobe Creative Cloud
    • Blazegraph
    • Corpora
    • Diffbot
    • Docker
    • Docusaurus
    • Figma
    • Fuseki
    • Getty Vocabularies
    • Geonames
    • Google Sheets
    • IBM Tivoli
    • Jupyter Notebooks
    • Keycloak
    • Kubernetes
    • Leaflet
    • Matomo
    • Next.js
    • Node.js
    • NanoID
    • npm
    • OpenRefine
    • Postgres
    • Rancher
    • React
    • ResearchSpace
    • Skosmos
    • SpaCy
    • Spark
    • Typesense
    • VIAF
    • VSCode
    • XTriples
    • Wikidata
    • Yasgui
  • Gitlab: 50 seats through an open-source grant that enables CI/CD processes to facilitate collaborative development.
  • Rare social media use through Facebook, Twitter/X, Bluesky (excluded from ratings).
  • University and personal subscriptions/accounts with proprietary software corporations.

Open source/open access/public: Yes. Proprietary: Yes. How heavily used: 10. Reliability: 8. Satisfaction for user: 8.

I admire the dedication of some colleagues to using only open-source platforms, but ease of use, convenience, and institutional demands have pushed me and our team towards a number of proprietary services for daily work.

For the platforms, economic, ethical, and sustainability considerations have led us to use almost entirely open source software, to adopt or adapt where possible existing tools, frameworks, and systems, and to contribute back to open-source efforts as resources permit.

Access to open software is often through proprietary code repositories such as GitHub and GitLab.

Ostensibly free access to proprietary platforms comes at the cost of allowing them to use my data.

This is not a comprehensive list of the open-source resources on which we rely; for instance, scripting languages are not listed here. Further details can be found in the code repositories provided in brackets for each platform.

It feels important to me to indicate the extent to which the open-source platforming projects described here depend upon the broader open-source software community. Such dependencies have downsides: for instance, delays in expected developments create downstream delays in the platforms that depend upon them, and software that becomes unsupported turns into a liability with respect to upgrades and, potentially, security. However, the difference between the initial homegrown back-end production system created by Orlando in the late 1990s and the polish, scalability, and versatility of CWRC has everything to do with the growth of the open-source software movement.
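To make the OLRC preservation layer listed above more concrete, here is a minimal sketch of the underlying object-storage operation: depositing a preservation package into an OpenStack Swift container with the python-swiftclient library. The endpoint, credentials, container, and file names are placeholders, and the OLRC’s actual deposit workflow runs through DuraCloud and Archivematica rather than raw Swift calls like these.

    # Minimal sketch: depositing a preservation package into OpenStack Swift
    # object storage using python-swiftclient. All names and credentials are
    # placeholders; the OLRC workflow actually runs through DuraCloud and
    # Archivematica rather than raw Swift calls like these.
    import swiftclient

    conn = swiftclient.Connection(
        authurl="https://swift.example.org/auth/v1.0",  # placeholder endpoint
        user="project:preservation",                    # placeholder account
        key="SECRET",                                   # placeholder credential
    )

    container = "cwrc-preservation"  # hypothetical container name
    conn.put_container(container)    # idempotent: creates it if absent

    # Upload an Archivematica-style AIP (Archival Information Package).
    with open("aip-2024-09.7z", "rb") as package:
        conn.put_object(
            container,
            "aip-2024-09.7z",
            contents=package,
            content_type="application/x-7z-compressed",
        )

    # Confirm the deposit and record the checksum (etag) for fixity checking.
    headers = conn.head_object(container, "aip-2024-09.7z")
    print("stored; md5 etag:", headers["etag"])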

High-performance computing infrastructure

Here defined expansively to include the use not just of supercomputers and computer clusters or grids but any high-performance computing infrastructure or special GPU and other processors exceeding the capabilities of an individual workstation, laptop, or server.

CWRC resource allocation (includes Orlando production resources) through national infrastructure:
  • HPC allocation (RAP: rpp-sbrown-ab):
    • 77 TB of /nearline storage on the graham-storage system
  • Cloud allocation (RAP: cpp-sbrown):
    • 99 VCPU-years on the arbutus-persistent-cloud system
    • 12 cloud instances on the arbutus-persistent-cloud system
    • 197 GB of RAM on the arbutus-persistent-cloud system
    • 2 floating IP addresses on the arbutus-persistent-cloud system
    • 40,000 GB of cloud volume and snapshot storage on the arbutus-persistent-cloud system
LINCS allocation through national infrastructure:
  • 280 VCPU-years on the arbutus-compute-cloud system
  • 23 cloud instances on the arbutus-compute-cloud system
  • 2,370 GB of RAM on the arbutus-compute-cloud system
  • 20 volumes on the arbutus-compute-cloud system
  • 20 floating IP addresses on the arbutus-compute-cloud system
  • 40 TB of cloud shared filesystem storage on the arbutus-compute-cloud system
  • 15,000 GB of cloud volume and snapshot storage on the arbutus-compute-cloud system
  • 8 VCPU-years on the cedar-compute-cloud system
  • 1 cloud instance on the cedar-compute-cloud system
  • 30 GB of RAM on the cedar-compute-cloud system
  • 2 volumes on the cedar-compute-cloud system
  • 1 floating IP address on the cedar-compute-cloud system
  • Digital Research Alliance of Canada (DRAC), through the Research Platforms and Portals competition.

Open source/open access/public: Not applicable. Proprietary: Yes. How heavily used: 10. Reliability: 8. Satisfaction for user: 7.

Some positives on the ethics front: because DRAC is a shared system, most unused resources can be allocated to other projects that need them, which is better in terms of environmental footprint. Data is housed on Canadian servers and is therefore not subject to the US Patriot Act, which is a consideration for some projects, especially Indigenous ones.

These are not HPC systems in the conventional sense: the projects described here use such systems only rarely, for focused data processing. However, in Canada they fall under "Advanced Research Computing," a term used for research computing that goes beyond the capacities of desktop systems, and they are provided by the same organization, the Digital Research Alliance of Canada. All faculty members and librarians at public Canadian colleges and universities are entitled to a default resource allocation through the Alliance, and larger allocations such as these are awarded through competitive peer-reviewed application processes. DRAC manages the machines and monitors for cybersecurity risks; systems administration and management from the O/S up is the responsibility of the user. Downsides include failures due to aging equipment, reliance on DRAC staff for certain fixes, and the lack of features such as 24/7 coverage, software-as-a-service, and infrastructure-as-a-service that would reduce our operating costs.

Actual use fluctuates with need, and there is some flexibility in the use of allocations; as of September 2024, LINCS is using significantly less than its allocation and CWRC is in some areas using more, as the comparison sketched after the usage figures below illustrates.

Usage at time of writing this manifest in 2024:
  • CWRC
    • 103 VCPUs
    • 16 cloud instances
    • 190GB RAM
    • 15 IP addresses
    • 31,000 GB cloud volume storage
  • LINCS
    • 238 VCPUs
    • 24 cloud instances
    • 1.7TB RAM
    • 11 IP addresses
    • 100 GB cloud volume storage
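The comparison mentioned above can be made concrete with a little arithmetic over the figures in this manifest. The sketch below is a rough check only: it treats an allocation of N VCPU-years on a persistent cloud as roughly N VCPUs held continuously for the year (a simplifying assumption), pools the LINCS arbutus and cedar allocations, and omits floating IP addresses and the shared filesystem storage.

    # Rough utilization check built from the allocation and usage figures in
    # this manifest. Simplifying assumptions: N VCPU-years on a persistent
    # cloud ~ N VCPUs held for the year; LINCS arbutus + cedar allocations
    # are pooled; floating IPs and shared filesystem storage are omitted.
    allocations = {
        "CWRC": {"vcpus": 99, "instances": 12, "ram_gb": 197,
                 "volume_gb": 40_000},
        "LINCS": {"vcpus": 280 + 8, "instances": 23 + 1,
                  "ram_gb": 2_370 + 30, "volume_gb": 15_000},
    }
    usage = {  # reported usage, September 2024
        "CWRC": {"vcpus": 103, "instances": 16, "ram_gb": 190,
                 "volume_gb": 31_000},
        "LINCS": {"vcpus": 238, "instances": 24, "ram_gb": 1_700,
                  "volume_gb": 100},
    }

    for project, alloc in allocations.items():
        for resource, allocated in alloc.items():
            used = usage[project][resource]
            pct = 100 * used / allocated
            flag = "over" if used > allocated else "within"
            print(f"{project:5} {resource:10} {used:>6} / {allocated:>6} "
                  f"({pct:5.1f}%, {flag} allocation)")

Run on these numbers, CWRC shows VCPUs (about 104%) and cloud instances (about 133%) over allocation while RAM and volume storage sit below it, and LINCS shows instances exactly at allocation (24 of 24) with everything else well under, consistent with the statement above.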
Other infrastructure

Open source/open access/public: Not applicable. Proprietary: Not applicable.

This infrastructure manifest was completed by Susan Brown and reports on Chapter 2, “Replatforming.”