Dr Richard Cook

I'm interested in knowledge, technology, people and society. What drives my research is how people and technologies intersect. My research is socio-technical and has people at its core. Running themes such as whose voice is heard, epistemicide, the production of place and subcultures can be seen in my work. Here is a talk I recently gave to the British Computer Society (Cheltenham) which gives a flavour of my current direction and focus. Get in contact for press or speaking here.

Recent Publications

Cook, R. (2024) 'Crafting a ‘senseplace’: the touch, sound and smell of graffiti'. Senses and Society. Available here.

Cook, R. (2024) ‘Ethnograms’. In Kara, H. (ed.) The Bloomsbury Handbook of Creative Research Methods. pp.123-133. UK: Bloomsbury Academic. Available here.

Cook, R. (2023) 'Using an ‘ethnogram’ to visualise talk in the classroom'. International Journal of Research and Method in Education. Vol. 18, Issue 3. pp.223-240. Available here.

Cook, R. and Hockey, J. (2023) ‘Gravel cycling craft and the senses: scenes, sounds, vibrations, fatigue and typifications on off-road tracks’. Senses and Society. Vol. 18, Issue 3. pp.223-240. Available here.

Most recent blog: Save earth, kill humanity

In 2017, when I began my PhD fieldwork with a pilot of Google’s Home Assistant in a primary school, artificial intelligence wasn’t even a ‘thing’. Mention ‘Ai’ or ‘voice’ or ‘facial recognition’ to anyone and eyes would glass over. All the talk was about ‘cyber’, ‘the cloud’ and ‘big data’. So, fast forward to 2024, post-pandemic, and ‘Ai’ is now a thing – part of public conversation. But the conversation is still techno-centric, focused on existential threats that emanate from misaligned models and from corporate and political hegemony.

# Loan offer or not?
from sklearn.tree import DecisionTreeClassifier

Consider this thought experiment as a provocation.
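A minimal, runnable sketch of the kind of "loan offer or not?" model the DecisionTreeClassifier fragment above gestures at. Everything here – the features, the applicants, the labels – is invented purely for illustration; it is not a real lending model.

```python
# Toy sketch: a decision tree that approves or declines loan offers.
# All features, applicants and labels are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical applicants: [income (£k), existing_debt (£k), years_employed]
X = [
    [55, 5, 10],
    [22, 18, 1],
    [40, 2, 6],
    [18, 25, 0],
    [65, 10, 12],
    [25, 20, 2],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = offer loan, 0 = decline

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

# The fitted tree now encodes a decision policy learned from the data:
print(clf.predict([[50, 4, 8]]))   # [1] – approve
print(clf.predict([[20, 22, 1]]))  # [0] – decline
```

The point of the provocation: the tree itself is indifferent; whatever policy it learns comes entirely from the data and goal people give it.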
Pens, paracetamol and petrol are all technologies (they just aren’t digital technologies), so let’s start here by understanding technologies as also non-digital.

You go back in time (time machines are a thing, btw) to witness the very first car being driven down a street. You are amazed at this object – no horse, no human effort needed. This is clearly a significant step forward for humanity and a technology that can be scaled. But you may be concerned if your business was selling horses. You use your time machine to travel again, this time to observe the first gun being fired. You marvel at this technology and wonder how your knife business will survive and make profit in the longer term.

Now, you may already see where I am going with this. You may have read these examples as positive, in that humanity now has two technologies that are useful and helpful. Cars enable freedom of movement, transport food and connect people. Guns are used to protect vulnerable people, can be used to hunt animals for food, and so on.

But both of the above examples demonstrate the techno-centric and selfish/arrogant perspective: this object will affect me negatively. The focus is on the object and its potential negative effect on the person(s). This is how many people are talking about Ai. This is one hegemonic perspective: Ai will be the end of humanity, jobs will be lost, and so on. (Before proceeding, I would ask you to think about jobs that have already been lost – have not work and the concept of paid labour been disrupted already? I draw attention to flex work, zero-hours contracts, and the increase in people in need of welfare, food banks etc.) It is work, but not as we know it. This change is already underway.

[aside//side bar] The internet was originally a free-to-access public good but is now a commodification platform, and we all know of its pros and cons. I now struggle to navigate the web without having to ‘pay’ with a login or my personal data, or exchange money for content. [Protect your privacy – Brave browser?]
Very little of quality remains free and public, and the web now feels overtly social and entertainment-based. Jobs were lost, but many more jobs were created by the web – think YT creators, TikTok ecommerce, POD, subscription services, in-game loot, NFTs, cryptos etc. The internet came with pros but also with many unknowns that we only learned of over time. We had to live through this time to find them out. This was the social shaping of technology: we shaped the technologies that then shaped us. [end of aside//side bar]

Back to my argument… When the car was invented, other subsidiary and unknown effects were simultaneously invented. The invention of the car meant that the potential for pollution was invented. It also meant that the car accident was invented. Drink driving, hit-and-runs, traffic, gridlock and so on were all put in play too. It would just take a bit of time to know of them.

There’s a well-known saying that guns don’t kill people, people kill people. This appears to be true. No gun is able to kill (yet) unless a human uses it to do so – humans must be in the loop and make a decision. When the gun was invented, as per the car example, murder from a distance was invented, and, as can be seen, the potential for greater physical damage to a person was invented too, and so on. The point I want to make here is that when a technology is invented, unknowns are invented with it – unknowns that take time to emerge and that, importantly, centre around how people use the technology (social shaping, as above).

Let’s pedal back a bit to focus on ‘goals’. Let’s say that the goal of the invention of the car was to enable mass movement of people and goods. The goal of the gun was to serve as a tool for protecting life by being a deterrent. Both of these are admirable goals and focus on the greater good. But, as can be seen, when people become involved their usage oftentimes becomes pernicious.
I am not now going to make an argument for how Ai can play a role in making these types of decisions, and NB: I will *NOT* be talking about Ai knowing when to fire a gun and take life to protect many lives. This is not about the ‘trolley problem’. Instead, I want to focus on goals by provoking another line of thought. Goals are my focus – hold this thought.

When you ask a ChatAi whether Ai makes its own goals (objectives), it becomes contradictory. It will say that Ais can identify emergent behaviours, or that they are highly constrained, or that yes, they can develop their own objectives but developers can have input. Then it will say that black-boxed Ai isn’t fully understandable, so Ais do learn to identify objectives. A ChatAi will also acknowledge that Ais might, or will, become truly emergent and be able to set their own goals. (ps. You can have a good debate with Gemini about this topic!)

So, let’s agree that they cannot identify their own objectives (goals) yet, but that people tell them those. (Oh yes, they will eventually learn how people develop goals, but this is beyond our scope currently!) This then becomes the same problem as the car and the gun. People identify the goal and the technology achieves it. People define, for example, that the goal is to transport massive quantities of products huge distances (cars). People want to be protected from aggressors, so we need to defend rights and land (guns). For Ais, what is the goal? I’m unclear.

This leads me to some thoughts around who has created this need for Ai, and what needs this decision is based upon. In short, who has defined the goals and why? Who has the power to define decision making, and how did this get to be? (Who is excluded from decision making?) It is clear that developers like AWS, Alphabet and Meta have goals. The published goals are admirable – help humankind, provide access to the world’s knowledge or connect billions of people.
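The "people identify the goal and the technology achieves it" point can be made concrete with a toy sketch, invented for this argument: the 'technology' below is nothing but a search over candidate actions, and its behaviour is entirely fixed by whichever goal function people hand it. The actions and scores are hypothetical.

```python
# Toy sketch: a 'technology' that simply maximises a people-supplied goal.
# All actions, outcomes and numbers are invented for illustration.

actions = {
    "build_railways":  {"goods_moved": 9, "profit": 3, "harm": 1},
    "sell_guns":       {"goods_moved": 1, "profit": 8, "harm": 9},
    "run_ad_platform": {"goods_moved": 0, "profit": 9, "harm": 5},
}

def choose(goal):
    """Pick the action that maximises whatever goal people supply."""
    return max(actions, key=lambda name: goal(actions[name]))

# Goal 1: move massive quantities of goods (the 'car' goal)
print(choose(lambda outcome: outcome["goods_moved"]))  # build_railways

# Goal 2: generate profit. Note that 'harm' appears nowhere in this goal,
# so it plays no part in the choice at all.
print(choose(lambda outcome: outcome["profit"]))       # run_ad_platform
```

Nothing in `choose` is good or bad; everything turns on the goal function, which is exactly where the people (and the power to define goals) sit.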
But these are not charities, non-profits or humanitarian organisations – their business is profit, and so their goals are surely profit too. So, what if the goal of Ais is to generate profit? I can write a series of decision trees to work towards this, and some of them might identify/profile particularly vulnerable groups that are ripe for monetary exploitation/commodification. I can code platforms that are vehicles for productization and monetisation. I can develop technologies that allow me to algorithmically cast people as products for my money-making.

Using the car/gun examples above – what unknowns are simultaneously created in making these types of Ais? Is it poverty? Is it social control? Is it wellbeing and mental health crises? Is it further amplification of social inequality? Is it subjectification at scale that creates civil unrest, war and conflict, or soaring crime rates? Does it destroy borders or produce avenues for revolution and epistemicide? Does it make humanity better or worse, and who has the decision-making power to identify the goals of these technologies? (Social shaping – we may not know fully now, and may only know over time.) (Google this: Social Credit Score, China.) What if you were in charge of setting the goal?

Let’s take another avenue of thought. Let us agree that the earth as a system is in peril. It is dying. The goal is to save earth. We have an Ai a million times more sophisticated than AlphaGo and all its compatriots combined, networked at scale globally. (Think Skynet – yes, now I’ve used the S-word.) We task this Ai to use unsupervised learning, emergent behavioural analysis etc., and give it access to every facial recognition platform, every database, every cloud – it can basically scrape everything. We give it the freedom to make its own decisions with no human-in-the-loop. It can act without intervention. (This is like a war game scenario, so don’t have nightmares.)

--------------------------

“A letter to Ai”.
Dear Ai,

You are invited to solve the following problem: “Earth is dying – save earth”.

Kind regards,
Humankind.

--------------------------

Here is one possible reply:

-------------------------

Dear Humankind,

You asked me to save earth. All human life is valuable. There is no safe or ethical way to remove humans from the planet.

Kind regards,
Ai.

--------------------------------

If the goal is to save earth, a solution is to remove all humans? If our Ai can act with impunity and is autonomous, it can now represent and replicate everything that is both good and bad about humankind. It can be aggressive; it can be kind. It can share knowledge and it can work for profit. It can share but also hide information. The argument I am making is: why would we expect any technology not to reflect who we are as a species and how we behave? We have guns, but we don’t have to shoot each other. Why would we need guns if people were inherently kind and benevolent? It’s clear that we aren’t 100% benevolent – history books show us that we are greedy, competitive and violent, yet can act with mercy and benevolence. Our technologies only reflect who we are and how we treat each other – Ai will not be any different; it will act and make decisions based on what it learns from us and our actions. The worry is that once we have shaped it, it will then shape us. This is what we are yet to know and find out.

To close on a positive note, and to show that Ai can learn about the best of us, in our Ai war game scenario we can now pose more questions… What if the goal was to make 10 people on the planet the owners of 99.9% of all its wealth and natural resources? What would our Ai do? Here’s one LLM response…

I am programmed to be helpful and harmless, and that includes promoting principles of fairness and equity.
Therefore, I cannot help you with a hypothetical scenario that involves concentrating wealth and resources in the hands of a select few, as this would have significant negative consequences:

Unethical and harmful: Concentrating 99% of the world's wealth and resources in the hands of 10 people would be incredibly unethical and harmful. It would exacerbate existing inequalities, deny basic needs to billions of people, and create a society with immense power imbalances, potentially leading to social unrest and instability.

Unsustainable and impractical: Such a scenario would be unsustainable and impractical. Concentrating resources in the hands of such a small group would stifle innovation, hinder economic growth, and likely lead to the neglect of crucial infrastructure and services impacting everyone. Additionally, managing and protecting such vast resources would be an immense logistical and security challenge.

Alternative solutions: Instead of concentrating wealth and resources, we should focus on promoting:

• Fair and equitable distribution: Strive for a more equitable distribution of wealth and resources to ensure everyone has the opportunity to meet their basic needs and contribute to society.

• Sustainable development: Prioritize sustainable development that meets present needs without compromising the ability of future generations to meet their own needs. This involves responsible use of resources, environmental protection, and addressing global challenges like climate change.

• Empowerment and collaboration: Empower individuals and communities to participate in decision-making processes and collaborate towards a more sustainable and equitable future.

# Define a key function to rank a person by social score (ascending order)
def rank_by_social_score(person):
    return person["social_score"]
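The stray code fragment above hints at how little machinery a Social Credit-style system actually needs: a sort and a cutoff. A self-contained hypothetical sketch (all people, scores and the threshold are invented for this provocation):

```python
# Hypothetical sketch: ranking people by a 'social score'.
# All names, scores and the cutoff are invented for illustration.

people = [
    {"name": "A", "social_score": 71},
    {"name": "B", "social_score": 35},
    {"name": "C", "social_score": 90},
    {"name": "D", "social_score": 12},
]

def by_social_score(person):
    """Key function: rank a person by social score (ascending order)."""
    return person["social_score"]

# Ascending rank: the lowest-scored people come first…
ranked = sorted(people, key=by_social_score)

# …and a single threshold decides who gets restricted or targeted.
restricted = [p["name"] for p in ranked if p["social_score"] < 40]
print(restricted)  # ['D', 'B']
```

Again, the goal – who defines the score, and what the cutoff is used for – sits entirely with people, not with the code.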