
T5 WORKSHOP WEEK 1

  • Feb 14
  • 7 min read

Updated: Feb 28

Self-portrait
© Melanie Meggs, Self-portrait in the Sunflowers (2023), digital photo.

In this workshop I am using experimentation as a method of inquiry into how emerging technologies can restructure my existing practice. For this Reflect and Share, I am testing how algorithmic image processing and time-based media can extend this investigation beyond static representation. Through preliminary trials with tessellated self-portraiture and sound-based experimentation, I want to explore how new technologies can produce visual and spatial effects that reflect fragmentation, instability, and uncertainty within my self-image. Rather than focusing on technical novelty for its own sake, I am testing how digital manipulation can deliberately disrupt coherence, misalign structure, and introduce tension into the portrait. The aim is to identify which technologies most effectively translate a psychological state of dislocation and unpredictability into material form. In previous projects, I used Midjourney to manipulate photographs in relation to broader cultural narratives of Australian beach identity.



Painting of 2 girls on a beach
© Melanie Meggs, Strong Currents: On this Beach, She Held Her Own Moment 3 (2025), watercolour on plexiglass.

For this project, the inquiry becomes inward. The focus shifts from representation to structural processing, examining how algorithmic systems fragment, repeat, and condition the visibility of the self.



Self-portrait
© Melanie Meggs, Me at almost 51 (2026), digital print.

I am testing a few different technological approaches using my black-and-white self-portrait as the source image.


This stage is exploratory and comparative. The aim is to determine which system most effectively translates a psychological condition of instability and fragmentation into visual structure.


To me, the portrait is direct and confrontational, and I cannot view it neutrally. I struggle with seeing myself in an image. Although I present as forthright and assured, I hold a rigid internal standard of how I believe I should appear.


Turning fifty has intensified that tension. Age has altered my face in ways that feel unfamiliar. Stress, illness, motherhood, responsibility, and menopause have marked the body and shifted the surface I once recognised as stable.


The camera records without negotiation. It does not soften memory or align with self-perception. When I look at this portrait, I do not simply see documentation; I see evidence of time, fatigue, endurance, and change. I see how external pressures have shaped both appearance and thought. There is a gap between how I imagine myself and what the lens reflects back. That gap is uncomfortable for me.


This discomfort is central to why this image becomes an effective baseline for disruption. The portrait already carries tension between identity and perception. The technological fragmentation I am proposing does not create instability from nothing; it amplifies an existing fracture. The manipulation of the image parallels how life stages, bodily change, and social expectation have reorganised my sense of self. The tessellation or digital displacement becomes an extension of that internal conflict rather than a decorative intervention.


Asking ChatGPT:


A ChatGPT discussion
OpenAI 2026, ChatGPT (February 2026 version), large language model, OpenAI, San Francisco, viewed 14 February 2026, https://chat.openai.com

ANSWER: Below is a structured list of alternative technologies you could use to achieve tessellation, fragmentation, displacement, or destabilisation effects on your self-portrait. I’ve grouped them by level of control and conceptual alignment with systemic fragmentation.

Algorithmic / Generative Systems (External Logic Driven)


1. Processing

  • Write code to divide the portrait into grids.

  • Randomise tile positions or introduce probability-based shifts.

  • Can create a puzzle that never fully resolves.


Conceptual strength: True systemic autonomy.

Limitation: Requires coding literacy.


Processing aligns conceptually with my investigation into systems and instability because it would allow my self-portrait to be fragmented through autonomous code. However, I cannot use it. I do not have coding skills and I am not capable of learning to code at the level required. To realise this option, I would need to pay someone else to build it. For this workshop, it is not a viable technology for me.
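To make the grid-slicing idea concrete, the logic ChatGPT describes can be sketched in a few lines of Python. This is my own illustration, not part of the ChatGPT answer: the function name `tessellate` and its parameters are hypothetical, and a working version would read real image pixels through a library such as Pillow rather than plain lists. It shows how tiles can be cut from a grid and swapped with probability-based shifts, so the reassembled portrait becomes a puzzle that never fully resolves.

```python
import random

def tessellate(pixels, tile, shift_prob=0.5, seed=1):
    """Slice a 2D pixel grid into square tiles and randomly swap tiles.

    pixels: a list of rows (each a list of values) whose dimensions are
    multiples of `tile`. With probability `shift_prob`, each tile is
    swapped with another randomly chosen tile, introducing the
    probability-based displacement described above.
    """
    rng = random.Random(seed)
    rows, cols = len(pixels) // tile, len(pixels[0]) // tile

    # Cut the grid into tiles, keyed by their (row, column) position.
    tiles = {}
    for r in range(rows):
        for c in range(cols):
            tiles[(r, c)] = [row[c * tile:(c + 1) * tile]
                             for row in pixels[r * tile:(r + 1) * tile]]

    # Probability-based shifts: swap some tiles with random partners.
    keys = list(tiles)
    for k in keys:
        if rng.random() < shift_prob:
            other = rng.choice(keys)
            tiles[k], tiles[other] = tiles[other], tiles[k]

    # Reassemble the displaced tiles into a full grid.
    out = [[0] * len(pixels[0]) for _ in pixels]
    for (r, c), t in tiles.items():
        for i in range(tile):
            out[r * tile + i][c * tile:(c + 1) * tile] = t[i]
    return out
```

With `shift_prob=0` the portrait reassembles intact; as it rises toward 1, more of the image is displaced, which is the dial between coherence and fragmentation.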



2. TouchDesigner

  • Node-based real-time visual processing.

  • Create live tessellation, feedback loops, kaleidoscopic splits.

  • Can respond to sound input.


Strength: Strong alignment with systems theory.

Limitation: Technical learning curve.


TouchDesigner aligns strongly with my investigation into systems. The ability to create live tessellation, feedback loops, and sound-reactive fragmentation directly reflects my interest in instability and systemic mediation. Conceptually, it is an excellent fit. However, it presents a significant technical barrier. The learning curve is steep, and I do not currently have the skills to use it effectively within the timeframe of this workshop. Like Processing, it would require substantial training or external assistance. For now, it is not a practical option for me, despite its strong theoretical alignment. But it is something I may investigate further for future projects.

3. Max MSP

  • Audio-reactive visual fragmentation.

  • Portrait distorts in response to breath or glitch sound.


Strength: Embodied instability.

Limitation: Complex interface.


Max MSP would allow my portrait to fragment in response to live sound, such as breath or glitch noise. The idea that the image destabilises through embodied input aligns closely with my interest in instability and psychological tension. It introduces a direct relationship between body, sound, and visual disruption. However, the interface is complex and highly technical. I do not have experience with this software and learning it would require significant time and specialised knowledge.


Controlled Digital Fragmentation (Authored Manipulation)



4. Adobe After Effects

  • Displacement maps.

  • Grid slicing and animated misalignment.

  • Looping unresolved reconstruction.


Strength: Precise control over instability.

Limitation: More visibly constructed.


Adobe After Effects is a more practical option for my workshop investigation. Using displacement maps, grid slicing, and animated misalignment, I could deliberately fragment my self-portrait and create a looping sequence that never fully resolves. This would allow me to control the degree of instability while maintaining recognisability. The strength of this approach lies in precision. I can decide exactly how and where the image fractures, which supports a focused conceptual outcome.
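A displacement map, in plain terms, offsets each pixel by an amount read from a second image: mid-grey means no shift, lighter and darker values push pixels in opposite directions. The sketch below is my own reduction of that idea to one axis in Python; the name `displace` is hypothetical, and After Effects' actual effect displaces on both axes with interpolation.

```python
def displace(pixels, dmap, strength=3):
    """Shift each pixel horizontally by an amount read from a
    displacement map, reduced to one axis for clarity.

    pixels and dmap are 2D lists of equal size; dmap values in [0, 1]
    are centred at 0.5, so 0.5 means "no shift". `strength` is the
    maximum shift in pixels; rows wrap around at the edges.
    """
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Map value 0..1 -> shift of -strength..+strength pixels.
            shift = round((dmap[y][x] - 0.5) * 2 * strength)
            row.append(pixels[y][(x + shift) % w])
        out.append(row)
    return out
```

The appeal for this project is that the map itself can carry meaning: a fracture pattern used as the map would fragment the portrait exactly along authored lines, which is the "precise control over instability" noted above.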



5. Adobe Photoshop

  • Manual slicing and offset layers.

  • Mosaic filter for tessellation.

  • Liquify tool for subtle distortion.


Strength: Immediate and accessible.

Limitation: Static unless animated elsewhere.

Adobe Photoshop would allow me to manually slice the portrait, offset layers, and experiment with fragmentation quickly. It is accessible and familiar at a basic level. However, the mosaic filter and Liquify tool do not produce the type of visual disruption I am seeking. The mosaic effect feels decorative rather than structurally destabilising, and Liquify creates distortion that reads as warped rather than systemically fragmented. The outcome looks manipulated rather than processed.

6. Blender

  • Convert portrait to 3D mesh.

  • Fracture modifier to break face into planes.

  • Animate pieces drifting apart.


Strength: Moves fragmentation into spatial form.

Limitation: More technical setup.


Conceptually, Blender is compelling because it shifts fragmentation from a flat image into a spatial condition. The instability becomes architectural rather than purely graphic. However, the technical setup is significant. Creating and manipulating a 3D mesh requires skills and time that I do not currently have. The process risks becoming about learning software rather than developing the conceptual outcome. While the spatial potential is strong, it is not a practical option for me at this stage.

AI / Machine Learning Systems


7. Runway

  • AI-based video distortion.

  • Style transfer with geometric overlays.

  • Frame-by-frame instability.


Strength: Hybrid AI + motion.

Limitation: Risk of aesthetic trendiness.


The ability to introduce geometric overlays and shifting disruptions aligns with my interest in systems processing identity over time. Runway combines AI and motion in a way that feels accessible without requiring advanced coding. However, there is a risk that the outcome may appear stylistically driven rather than conceptually rigorous. AI distortions can quickly feel trend-based if not carefully framed. Time is also a factor; I may experiment with the free version to test whether the instability it produces aligns with the structural effect I am seeking before committing further.

8. Stable Diffusion

  • Use image-to-image mode.

  • Prompt geometric fracture or mosaic reinterpretation.


Strength: Experimental unpredictability.

Limitation: Less structural control.


Stable Diffusion could be used to reinterpret my self-portrait through image-to-image prompting. Its unpredictability is conceptually appealing because it introduces instability beyond my direct control. However, the lack of structural precision means the results may not align with the specific geometric fragmentation I am seeking. It would be useful for experimentation.


Physical + Digital Hybrid Approaches


9. Projection Mapping

  • Using MadMapper

  • Project moving tessellations onto a physical sliced print.

  • Digital layer misaligns from physical image.


Strength: Strong installation presence.

Limitation: Equipment required.


MadMapper would allow me to project moving tessellations onto a physical surface. This has strong installation potential and reinforces the tension between structure and instability. However, it requires access to projection equipment, the purchase of expensive software, and time to learn the program. I do not currently have the capacity to invest in that process. To realise it properly, I would likely need to outsource the technical setup, which shifts control and increases cost, making it impractical for this project this semester.


10. 3D Printing Fragmented Planes

  • Print portrait as layered transparent sheets.

  • Offset each layer slightly.


Strength: Spatial dislocation.

Limitation: Production time.


3D printing fragmented planes is a direction I am genuinely interested in pursuing. Printing my self-portrait as layered transparent sheets and offsetting each layer slightly would introduce spatial dislocation rather than flat distortion. The fragmentation would exist in depth, requiring the viewer to move around the work to attempt visual alignment. This aligns strongly with my investigation into instability and partial coherence.

This option feels materially and conceptually consistent with my practice. It moves beyond screen-based manipulation into object-based structure. It may also be possible through the 3D workshop Andy is organising at Windale, which makes it more accessible than other high-end digital systems. The limitation is production time and technical preparation: file setup, slicing, and printing require planning and coordination. However, unlike coding-based systems, this feels achievable with guided support. At this stage, this is one of the most viable and conceptually strong options for further testing.

When comparing options, the key question is:


Do I want fragmentation to appear imposed by an external system?

Or visibly constructed by my own hand?



© 2026 by Melanie Meggs
