during the week 2 workshop, i developed my portfolio website. we were encouraged to use a template to build the structure of the site, which helped me quickly organise my pages and start thinking about the overall layout. i attempted to write as much of the css myself as i could, without relying on the template.
working on the website also gave me a chance to experiment with html and css, practising how to structure sections, insert images, and format text effectively. i found it useful to see how changes in the code immediately affected the layout and design when viewed in a browser.
overall, week 2 helped me establish the foundations of my online portfolio and gave me confidence in managing content and structuring a website for ongoing development.
as part of my exploration, i experimented with web scraping using webscraper.io to collect data from smaller ecommerce boutiques. i found it powerful to automatically gather large amounts of data in minutes—something that would take hours manually. i also ran into challenges such as dynamically loaded content and messy, inconsistent html.
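the kind of extraction webscraper.io does point-and-click can be sketched in plain python with the standard library. everything below is made up for illustration — the messy html snippet, the class names and the prices come from no real boutique site — but it shows why inconsistent markup (unquoted attributes, unclosed tags) makes scraping fiddly:

```python
from html.parser import HTMLParser

# hypothetical, deliberately messy product listing of the kind
# that made extraction tricky: one attribute unquoted, tags unclosed
MESSY_HTML = """
<div class="product"><span class=name>Linen Scarf<span class="price">£18</span>
<div class="product"><span class="name">Tote Bag</span><span class="price">£25</span></div>
"""

class ProductScraper(HTMLParser):
    """collects (name, price) pairs from product listings."""
    def __init__(self):
        super().__init__()
        self.current = None   # field currently being read
        self.products = []    # accumulated (name, price) rows
        self._row = {}

    def handle_starttag(self, tag, attrs):
        css_class = dict(attrs).get("class", "")
        if tag == "span" and css_class in ("name", "price"):
            self.current = css_class

    def handle_data(self, data):
        if self.current and data.strip():
            self._row[self.current] = data.strip()
            self.current = None
            if "name" in self._row and "price" in self._row:
                self.products.append((self._row["name"], self._row["price"]))
                self._row = {}

scraper = ProductScraper()
scraper.feed(MESSY_HTML)
print(scraper.products)  # [('Linen Scarf', '£18'), ('Tote Bag', '£25')]
```

even this toy parser needs special-case logic to survive the broken first row — the same case-by-case handling real boutique pages demanded in the workshop.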
ethically, scraping images proved difficult, which made me realise the importance of privacy and security on these sites. i therefore focused only on publicly available information and avoided overloading the sites with requests.
overall, the workshop helped me begin developing my online portfolio while also highlighting how web scraping can be an efficient data collection method, but one that comes with both technical and ethical responsibilities.
for this workshop, our group worked with the university-led data collection scenario, focusing on how students engage digitally across learning and assessment contexts. we designed a short survey to collect data on students’ use of generative ai tools such as chatgpt and grammarly, aiming to understand patterns of engagement rather than individual opinions.
working through this task made me realise how much power lies in defining what counts as data. decisions about which variables to include — for example, usage frequency or purpose — directly shaped the kind of story our dataset could tell. i also became more aware of how institutional perspectives influence data collection: in this case, our approach reflected the university’s interest in productivity and engagement metrics rather than students’ creative or critical uses of ai.
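a tiny sketch of how that choice of variables plays out in practice. the responses and field names below are invented, not our actual survey data — the point is just that the variable we aggregate on decides which story the dataset tells:

```python
from collections import Counter

# hypothetical responses; the fields mirror the variables a survey
# might choose to record (tool, usage frequency, stated purpose)
responses = [
    {"tool": "chatgpt",   "frequency": "daily",  "purpose": "drafting"},
    {"tool": "grammarly", "frequency": "weekly", "purpose": "proofreading"},
    {"tool": "chatgpt",   "frequency": "daily",  "purpose": "brainstorming"},
    {"tool": "chatgpt",   "frequency": "rarely", "purpose": "proofreading"},
]

# aggregating on frequency foregrounds engagement metrics...
by_frequency = Counter(r["frequency"] for r in responses)
# ...while aggregating on purpose foregrounds what students use ai *for*
by_purpose = Counter(r["purpose"] for r in responses)

print(by_frequency)  # Counter({'daily': 2, 'weekly': 1, 'rarely': 1})
print(by_purpose)    # Counter({'proofreading': 2, 'drafting': 1, 'brainstorming': 1})
```

same four responses, two different narratives — which is exactly the institutional-framing point above.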
the process raised questions about bias and representation. even when aiming for neutrality, our survey assumed access to digital tools and a shared understanding of ai, which risks excluding certain groups. this showed me how data can reinforce existing hierarchies if ethical and contextual factors aren’t considered.
creating and visualising this dataset helped me see how design and interpretation are inseparable from analysis — the choices we make in collecting and displaying data always shape the narrative that emerges.
this workshop made me more aware that algorithms do not simply describe us; they actively produce versions of us. thinking in terms of input → process → output clarified how small, repeatable behaviours (searches, likes, timestamps) accumulate into an algorithmic identity that is both fluid and persistent. the exercise of inspecting advertising profiles exposed how platforms reduce rich, messy lives into a set of categories for marketing and optimisation—useful for platforms, partial for people.
working through sumpter's manual-scraping method was especially revealing. manually classifying 15 posts for 32 friends was slow, messy and necessarily interpretative: every decision about category boundaries, edge cases and what to exclude shaped the result. the method highlights how seemingly objective datasets depend on subjective labour — and how those labour choices ripple into the graphs we make.
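the interpretative slack in manual classification can be shown with a toy example. the posts, category labels and the two coders' decisions below are all invented — the point is that two people drawing the category boundaries differently produce different "objective" tallies from identical data:

```python
from collections import Counter

# hypothetical posts of the kind classified in the exercise
posts = [
    "holiday photos from cornwall",
    "sharing a petition about local bus cuts",
    "meme about exam stress",
    "new job announcement",
]

# two coders resolving the edge cases differently:
# is a petition 'politics' or 'personal'? is a meme 'humour' or 'study'?
coder_a = ["personal", "politics", "humour", "personal"]
coder_b = ["personal", "personal", "study", "personal"]

print(Counter(coder_a))  # Counter({'personal': 2, 'politics': 1, 'humour': 1})
print(Counter(coder_b))  # Counter({'personal': 3, 'study': 1})
```

any graph built from either tally looks equally authoritative, yet the difference between them is pure labelling judgement.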
ethically, the workshop underlined two points. first, the data we give away (and the data generated about us) privilege some behaviours and invisibilise others; second, regulatory tools such as data-request rights matter, but they don’t remove power imbalances.