I've been exploring the possibilities of cognitive awareness in synthetic intelligence, aiming to understand how we might develop an AI capable of user-centered architectural design and of enhancing the built environment around its occupants. This journey is my attempt to take a step toward integrating design cognition across 2D and 3D, and across cyber and physical environments. Using the Java-based language "Processing," I've been learning to create responsive programs driven by real-time environmental stimuli such as sound, light and temperature, all of which are crucial influences on architectural design quality.
Stage 1: Sound
On a personal computer, sound is perhaps the easiest stimulus to experiment with. So, for the first stage, I developed a piece of code that makes 2D shapes react to microphone input, changing properties such as their color and motion physics based on the intensity of the sound.
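The actual sketch is written in Processing and lives in the repository linked below. Purely to illustrate the core idea (mapping microphone loudness onto a shape's size and color), here is a rough, minimal analogue in Python; it assumes the sounddevice, numpy and opencv-python packages and is not the project's own code:

```python
import numpy as np
import sounddevice as sd
import cv2

level = 0.0  # smoothed microphone loudness, updated by the audio callback

def audio_callback(indata, frames, time, status):
    """Estimate loudness of each audio block as its root-mean-square value."""
    global level
    rms = float(np.sqrt(np.mean(indata ** 2)))
    level = 0.8 * level + 0.2 * rms  # light smoothing so the shape doesn't flicker

SIZE = 400  # side length of the square canvas in pixels

with sd.InputStream(channels=1, samplerate=44100, callback=audio_callback):
    while True:
        canvas = np.zeros((SIZE, SIZE, 3), dtype=np.uint8)
        # Louder input -> larger radius and a shift in color
        radius = 10 + int(min(level * 2000, SIZE // 2 - 10))
        color = (0, int(min(level * 5000, 255)), 255)  # BGR
        cv2.circle(canvas, (SIZE // 2, SIZE // 2), radius, color, -1)
        cv2.imshow("Sound-reactive shape", canvas)
        if cv2.waitKey(30) & 0xFF == ord('q'):  # press 'q' to quit
            break

cv2.destroyAllWindows()
```

The Processing version follows the same pattern: read an amplitude on each frame and feed it into the drawing code.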
The code base for this can be found at: This GitHub Repository
In a purely cyber environment, achieving this level of interaction is quite feasible. Using Processing, I developed my own version of Astrosmash. The twist is that the asteroids you have to dodge are generated in real time from sounds in the physical environment!
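The game itself is a Processing sketch, but the sound-driven spawning rule is simple: whenever the microphone's loudness crosses a threshold, a new asteroid appears at a random horizontal position. Here is a minimal, console-only Python sketch of that rule (again assuming the sounddevice package; the threshold is just a guess that would need tuning, and the real game's logic differs):

```python
import random
import numpy as np
import sounddevice as sd

SPAWN_THRESHOLD = 0.05  # loudness above which a new asteroid appears (needs tuning)
asteroids = []          # normalized horizontal positions of spawned asteroids

def audio_callback(indata, frames, time, status):
    rms = float(np.sqrt(np.mean(indata ** 2)))
    if rms > SPAWN_THRESHOLD:
        # A loud moment in the physical room spawns an asteroid in the game world
        asteroids.append(random.random())
        print(f"asteroid spawned at x={asteroids[-1]:.2f} (rms={rms:.3f})")

with sd.InputStream(channels=1, samplerate=44100, callback=audio_callback):
    sd.sleep(10_000)  # listen for ten seconds, then exit
```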
Want to play the game?
If you are on a Windows PC, you can play the game by pressing the download button below:
Stage 2: Vision
Since the COVID-19 pandemic, video cameras have become commonplace in homes, creating an opportunity for computer vision to change how we understand human interaction with built environments. Privacy concerns rightly limit widespread deployment, but in a controlled research setting this technology can offer real-time insight into movement patterns and occupancy heatmaps, providing valuable data for design and architecture. By contributing to the open-source project described below, you can help explore how computer vision might reshape the way we analyze spaces and user behavior, while keeping the tools efficient and accessible.
With this idea in mind, I have been conceptualizing and exploring ways to develop a project called "Human Movement Tracker with Real-Time Visualization".
This project is a Python-based application that integrates with a camera to detect room boundaries and track human movement in real time. Using libraries like OpenCV for video processing and MediaPipe for human pose estimation, the program identifies walls, floor, and ceiling from a video feed, generates a 2D floor plan, and maps occupant movements onto the plan. The user can visualize movement paths in real time, with options to export data for analysis. The application also includes a simple graphical interface for starting/stopping the camera feed, adjusting settings, and saving movement data.
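While the full application (boundary detection, floor-plan generation, the GUI and data export) is still under construction, the heart of the tracking loop can be sketched in a few lines. The snippet below is a simplified illustration of that core, using OpenCV and MediaPipe Pose as described above: it takes the hip midpoint as the occupant's position and overlays the accumulated movement path on the live feed. It is a sketch of the approach, not the project's actual code:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)  # default webcam
trace = []                 # accumulated (x, y) pixel positions of the occupant

with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # MediaPipe expects RGB frames; OpenCV delivers BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            left, right = lm[mp_pose.PoseLandmark.LEFT_HIP], lm[mp_pose.PoseLandmark.RIGHT_HIP]
            # Midpoint of the hips as a rough estimate of where the person is standing
            trace.append((int((left.x + right.x) / 2 * w), int((left.y + right.y) / 2 * h)))
        # Draw the movement path accumulated so far
        for a, b in zip(trace, trace[1:]):
            cv2.line(frame, a, b, (0, 255, 0), 2)
        cv2.imshow("Human Movement Tracker (sketch)", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
            break

cap.release()
cv2.destroyAllWindows()
```

Mapping those pixel positions onto a generated floor plan and exporting them for analysis is where the ongoing work, and hopefully the community's contributions, come in.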
This program is still under construction and is part of an ongoing research initiative, but I am opening it up to the community for feedback and collaboration. The goal is to enhance real-time human movement tracking using computer vision, and by making the code open source, I invite developers to help optimize and refine it. Your insights can help make this tool faster, smarter, and more efficient. Join us in pushing the boundaries of innovation!
The code base for this can be found at: This GitHub Repository
This project is just the beginning, and the code will keep evolving as part of ongoing research!
I'm excited to receive feedback to make it even better, and I'm eager to learn from others working on innovative projects in cognitively responsive environment design. Let’s collaborate and push the boundaries of how we interact with and understand our built spaces!