November 13, 2011
The final blogpost. Here we go!
Process . . .
Odd to think it’s been a semester since the concept of ‘Stomping Grounds’ took hold of me. It’s fun to imagine the infinite paths this project could’ve taken, but I’m grateful for the steps taken along the way.
Throughout this course, I have maintained this blog documenting my progress. (Quite a rewarding commitment).
From first flight:
to Strings of Ribbon:
and all the spaces:
and inspirations in between.
and have since culminated:
in a nice casual game.
. a casual immersive game .
This casual immersive game is designed for play in social spaces. The simulations demonstrated here would be set up in an open public area, with one portion (the 3D Kinect view) displayed as a large immersive (potentially panoramic) projection. Adjacent to this would be a multi-touch table which, like the Kinect, can also promote collaborative gameplay.
These platforms can encourage strangers, both passers-by and veterans, to mutually engage with their environment in a casual immersive game.
( Want to poke around at the java source code? Download here. )
DECO2606 Real Time Multimedia is an awesome class. Learnt a lot. One of the few projects I’ve thought could realistically be extended after my degree concludes. Perhaps as a tablet game, or an avenue of research into collaborative interactions in social spaces.
Though this is the last DECO2606 blogpost, it is certainly not the end of this project. After improvements and demos, the components and knowledge gained will be reused extensively for other projects (recursive elements like fractals are looking more and more interesting by the day as my next venture).
Overall a very fun and enjoyable class.
P.S. Also, please check out my other classmates’ sites, there’s some pretty amazing work!
November 13, 2011
So, this is to demonstrate that I got UDP working, synchronizing the states of two components of a casual game I’ve been working on.
One component would run on a multitouch table; the other on a large projector screen, with input from a Kinect.
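The synchronisation itself boils down to exchanging UDP datagrams between the two sketches. A minimal, self-contained sketch of the idea in plain Java (the port, address, and message format here are illustrative, not the project’s actual protocol):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch: one side pushes its game state over UDP,
// the other listens and applies it. For demonstration both
// sockets live in one process, talking over localhost.
public class StateSync {
    public static void main(String[] args) throws Exception {
        int port = 9000;

        // Receiver (e.g. the Kinect view) listens for state updates.
        DatagramSocket receiver = new DatagramSocket(port);
        receiver.setSoTimeout(2000); // don't block forever if nothing arrives

        // Sender (e.g. the touch-table view) pushes its state.
        DatagramSocket sender = new DatagramSocket();
        byte[] state = "boid,42,128,0.5".getBytes(StandardCharsets.UTF_8);
        sender.send(new DatagramPacket(state, state.length,
                InetAddress.getByName("127.0.0.1"), port));

        // Receive and decode the update.
        byte[] buf = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        receiver.receive(packet);
        String msg = new String(packet.getData(), 0, packet.getLength(),
                StandardCharsets.UTF_8);
        System.out.println(msg);

        sender.close();
        receiver.close();
    }
}
```

Running the two ends on separate machines only means swapping the loopback address for the other machine’s IP.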
( To any readers: Unfortunately, it doesn’t look like I’ll be able to access the facilities to showcase this app running in their intended environment, so I will make do with simulations for now, and upload one when I get a chance. )
November 12, 2011
This is a reflective journal entry and summary of the unit of study ARCA2606, archaeology’s “Maps, Time and Visualisation” subject, taught by Ian Johnson and Andrew Wilson. It has been quite a practical course, with a nice mix of theory to support the exercises. I have maintained an online student journal of my progress through this subject for personal documentation.
The class began with an overview of mapping, and the use and benefits of maps and visualisation over time. I learnt about the different characteristics of represented data, from symbology to chosen map projections (‘Getting the hang of ArcGIS’).
Over the first few weeks, the fact that “All Maps Lie” became much more apparent (especially through readings such as J.B. Harley’s paper “Deconstructing the Map” and Mark Monmonier’s “Lying with Maps”). This stems from maps’ subjective and contextual (historical, educational, political, religious, state-based) nature, and is the reason they should be treated as historical documents (‘Mapping the Past’). Likewise, uncertainty and technical errors were also present in historic maps, as can be seen in nine different views of the Mughal Empire in 1605.
Maps are pretty amazing: not only can they reveal stories, they can reveal so much about a culture, the map’s author, its intended audience, and the context of the map’s creation (‘Mapping Historical Data’).
The nature of cartography was explored: classifying maps and learning how to evaluate modern ones, along with their essential parts (such as scale, projection, symbolisation, orientation, key and title). The learning of these elements was augmented by our own initial drawings of the area in front of the Madsen Entrance.
This same knowledge carried over to later exercises, such as maps depicting the Irrawang Pottery Site.
Such weekly readings, lessons and activities were nicely supplemented by discussions such as “Why should we use an ‘incorrect’ map?”.
After exploring the core nature of cartography, Geographic Information Systems (GIS) were examined. This type of software is very versatile, with growing and widespread use. It can essentially be sub-divided into three functions: a geo-database for storing, relating and querying geospatial data; a geo-visualisation tool for querying, modifying and showing features in relation to the Earth’s surface; and geo-processing, where new geographic datasets can be derived from existing ones. The importance of themes (layers) was emphasized, along with the difference between raster and vector data types. (‘What is a GIS?’)
Layers in GIS are certainly powerful, aiding in filtering data sets down to the information most important for a map, so that knowledge can be derived from it, as practised in a mapping exercise.
Also covered were the many methods and techniques for collecting data for different uses, all with their own strengths and weaknesses. This data could be collected by oneself [LINK] , or obtained externally ( ’Sources of Data’ ).
The various strengths and weaknesses of using free and open-source data were also examined.
Different GIS types were explored (falling under proprietary or free-and-open-source GIS), and I speculate we will see a rise in their use at the consumer level, in much smaller and more portable systems with very different uses ( ‘Making maps gis types and speculated future’ ).
Historic drawings were also potential sources of data, but must likewise be viewed as historic texts. For example, a painter may add a house to a scene purely for aesthetic reasons.
Along with the intricacies of historical drawings, we also learnt various technical terminologies associated with GIS and geo-spatial data (‘Map Delivery’). Differences in historical drawings were observed in exercises that involved mapping them (also called rubber-sheeting, or georeferencing) to existing geospatial points. Such mappings could be augmented in Google Earth to show additional values, building types and structures.
Case studies such as the reading “Indigenous Perceptions of Contact at Inthanoona”, and other information (‘Usefulness of Digital Data Capture techniques during fieldwork’), revealed much about the process of collecting data at a site.
Different websites were also assessed for their presentation of information and interactivity.
Techniques were explored for geo-tagging routes (which I found extremely useful and will definitely re-use in my travels); these rely on GNSS. We also handled data from total station surveys, transforming it into contour lines (‘Gathering Data Practice’).
After getting a grounded understanding of GIS, and a tonne of knowledge on the collection of data, ‘Time’ was next on the subject name (‘Maps, Time, and Visualisation’). This was quite an intriguing topic, as time is a type of data that is conveyed abstractly (‘Handling Time’) and consequently has many different models.
The reading of Mostern & Johnson’s (2008) “From named place to naming event: creating gazetteers for history” was extremely insightful, and provided a very nice event-based model (having events ordered by time, grouped into chronologies, and ascribed causalities). Heurist also appears to emulate this model.
Visualisation was one of my favourite topics in this course, and one of the reasons I chose it. It was a shame that only one lesson was dedicated to its core and conception, but it was a good lesson and a lot of content was covered. The history of visualisation was interesting to learn, beginning as early as the 14th century with Nicholas Oresme. The matching of different data types with different visualisations was also extremely beneficial, and something that motivated further exploration (‘Importance of Data Graphs’, ‘Exploring some Aussie Visualisations’). And, like clockwork, Charles Minard’s map was mentioned yet again (but its importance in representing the most important datasets across multiple dimensions was much better explained than in previous descriptions I have heard). (‘Nature of Visualisation’)
After exploring the core ideals of visualisation, extra dimensions were explored, along with visualisations as simulations, and visualisations in websites, their content, and the user interaction design behind them.
( A visualisation mapping domestic and internationally-born populations )
The management and issues of digital data were addressed, with a large note on recycling and preserving data through digitisation.
As the course progressed, I began noticing an increasing amount of content related to this area of study; most recently: Russia’s own GNSS, an article on how to better visualise and map social statistics (‘Mapping Social Statistics Dots’), processes of mapping (‘To Map or Not to Map’), and beautiful historic visualisations (Willard Cope Brinton’s “Graphic Presentation”).
Since this subject began, I have learnt much about maps, and my knowledge of visualisation and the intricacies of mapping time has been augmented. From the lies maps tell to the nature of visualisation, from cartography to GIS and data collection, this has been a very insightful journey.
November 12, 2011
Ok, for the past few weeks, I’ve been trying to get the touch table working with the program again, but haven’t had the chance for a proper sit-down to figure it out. I just spent three hours with it, and I’ve narrowed down the problem space. I’ve ruled out driver and code faults. I don’t believe it’s the USB connecting the laptop to the touch overlay (as messages are still being delivered, albeit at default null/empty values, even when powered off). I believe it to be a power fault, primarily because the behaviour is the same with and without power. However, this suggestion may be contrary to stories of other people who have gotten the table working in the past week. For now, I will be ignoring the table due to time constraints, though it’d be nice to put the program back on it sometime in the future.
Meanwhile, I have also been working on the kinect-view of the application. The strings of boids seen in the previous post have now been transformed to their 3-dimensional representations, similarly with their ‘shootable’ enemies!
Currently the network runs off the same computer, but should work nicely (perhaps with less elements) over two when required (this has also been tested at smaller cases).
Next on the agenda:
- Cleaning up the code so we can ‘shoot’ enemies properly in 3D!
(Individual identification of entities such as Enemies required).
- Shooting enemies with the Kinect.
- Aesthetic touchups.
November 11, 2011
After trialling a few different methods of networking in Processing, a UDP library was used.
UDP was used over TCP here as it is faster. It was found that any data sent should ideally be encoded to minimise interpretation errors. However, dropped and corrupted data still occur with UDP (which is something I will have to adjust for).
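One way to adjust for corrupt datagrams is to tag and delimit each message, then parse defensively and simply discard anything malformed instead of drawing it. A hypothetical sketch of that idea (the message format and names are illustrative, not the project’s actual encoding):

```java
// Sketch: encode a touch point as a tagged, delimited string and
// decode it defensively, rejecting anything malformed rather than
// drawing garbage on screen. Format and field names are illustrative.
public class PacketCodec {
    static String encode(int id, float x, float y) {
        return "PT:" + id + ":" + x + ":" + y;
    }

    // Returns {id, x, y}, or null if the packet is corrupt.
    static float[] decode(String msg) {
        if (msg == null || !msg.startsWith("PT:")) return null;
        String[] parts = msg.split(":");
        if (parts.length != 4) return null;
        try {
            return new float[] {
                Float.parseFloat(parts[1]),
                Float.parseFloat(parts[2]),
                Float.parseFloat(parts[3])
            };
        } catch (NumberFormatException e) {
            return null; // a bit-flipped number: drop the whole packet
        }
    }

    public static void main(String[] args) {
        System.out.println(encode(3, 120.5f, 88.0f));
        System.out.println(decode("PT:3:120.5:88.0") != null);
        System.out.println(decode("PT:3:12&.5") == null);
    }
}
```

Dropping a bad packet costs almost nothing here, since the next state update arrives a frame later anyway.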
Here are some screenshots of a program drawing the data it received from the original multitouch table program, to nice effect.
( You can see the UDP errors, and hence misinterpretations of the encoded co-ordinate data, as the white dots stuck at the top of the screen. )
( Showing enemy positions over time as well, getting gobbled up by the boids. )
November 11, 2011
So shooting in 3D has always been a problem for me. I have always thought these types of problems (anything to do with transposing between multiple dimensions) had to be approached manually. Luckily for me, Processing can translate its 3D co-ordinates to screen co-ordinates with screenX() and screenY()!
It was then a simple matter of detecting objects close to the cursor by their projected screen position and z-depth. As such, direct selection of objects could be achieved, as well as the ability to ‘auto-target’ enemies.
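Once each object has a screen position, the selection step reduces to a 2D distance test with a z-depth tiebreak. A standalone sketch of that picking logic (with a hand-rolled perspective projection standing in for Processing’s screenX()/screenY(); the focal length, screen centre, and radius are illustrative):

```java
public class AutoTarget {
    // Simple perspective projection standing in for Processing's
    // screenX()/screenY(): points further from the camera (larger z)
    // shrink toward the screen centre (cx, cy).
    static float[] project(float x, float y, float z,
                           float focal, float cx, float cy) {
        float s = focal / (focal + z);
        return new float[] { cx + x * s, cy + y * s };
    }

    // Pick the object whose projection is nearest the cursor and within
    // the auto-target radius; on an exact tie, prefer the one closest
    // to the camera (smallest z). Returns -1 if nothing is in range.
    static int pick(float[][] objs, float mouseX, float mouseY,
                    float radius, float focal, float cx, float cy) {
        int best = -1;
        float bestDist = radius;
        float bestZ = Float.MAX_VALUE;
        for (int i = 0; i < objs.length; i++) {
            float[] p = project(objs[i][0], objs[i][1], objs[i][2],
                                focal, cx, cy);
            float dx = p[0] - mouseX, dy = p[1] - mouseY;
            float d = (float) Math.sqrt(dx * dx + dy * dy);
            if (d < bestDist || (d == bestDist && objs[i][2] < bestZ)) {
                best = i;
                bestDist = d;
                bestZ = objs[i][2];
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Each enemy is {x, y, z} in world space.
        float[][] enemies = { { 100, 0, 50 }, { 100, 0, 400 }, { -200, 80, 10 } };
        System.out.println(pick(enemies, 390, 240, 60, 300, 320, 240));
    }
}
```

Because the radius is generous, a shot near (but not exactly on) an enemy still snaps to it, which is the ‘auto-target’ behaviour.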
November 11, 2011
Did a quick technical demo of how I had set up the Kinect. (This was also the first day I couldn’t get the multitouch table working.)
Anyway here’s a (clickable) animated gif of the reactions.
It was a simple technical demo, basically allowing users to sync in and ‘catch’ fireflies with their hands.
The demo revealed that the implementation did not scale well to an environment populated by many people. (I had done some initial testing, but only with up to four people, and I believed I had caught all the exceptions in Java.) The error, however, comes from OSCeleton crashing when there are too many objects to track (producing a “segmentation fault” message).
Oliver previously had this problem and transitioned to SimpleOpenNI, as it was more stable. I may do the same, given time.