On the Contingency of Machine Vision as a Substitute for Visual Cognition inside 3-D Intermediate Spaces



An Unapt Introduction to Depth Sensing and Spatial Cognition


RGB-D Sensor Mounted on a 160 cm Vertical Metal Construction with a Circular Plate / Head-Mounted Display with 2 Base Stations
Real-Time Point Cloud Data Generation inside a 3-D Wireframe Model, 800 cm x 800 cm x 600 cm, running on a Photogrammetric Inducer with the use of a Workstation (see the sketch below)
Soundscapes are generated by Václav Tvaruzka, http://www.vaclavtvaruzka.com
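
As a side note for the technically inclined reader, the following is a minimal sketch of how a single depth frame from an RGB-D sensor can be back-projected into a point cloud under a pinhole camera model. The resolution, the intrinsic parameters fx, fy, cx, cy and the randomly filled depth frame are illustrative placeholders, not the values of the installation's actual sensor or software.

import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    # Back-project a depth image (in metres) into an N x 3 point cloud using
    # the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # discard pixels without a depth reading

# Placeholder frame: 640 x 480 pixels, depths up to the 6 m height of the wireframe model.
depth = np.random.uniform(0.0, 6.0, (480, 640))
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)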



This work, in its studious compendium, attempts to substantiate a virtual-spatial environment that relies on the human visual system and its cognitive components, in particular the presumably indigent human sensory cognition and the mechanisms of the human eye as an optical interface, approached through the augmentation and digital elucidation of biological sensory data. The primary concern of the project has been prefabricated into subsections contingent on the apprehension of visual information and on the faulty thresholds of the human eye as an optical system, whose psycho-visually redundant properties are exploited by quantizers, such as the subspace approach in spectral color science, during the digitization of transmitted sensory data. This integral process of spatial cognition, constantly devoured into segments by the nervous system, bears a certain similarity to the path of visual stimuli, which are carried by the retinofugal projection along the optic tract before being enumerated and unraveled in the occipital lobes.
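
To make the notion of psycho-visual redundancy concrete: a quantizer discards intensity differences that the eye can barely discriminate. The sketch below is a generic uniform quantizer on a single 8-bit channel, assumed purely for illustration; it is not the subspace method of spectral color science mentioned above, only the simplest relative of that family.

import numpy as np

def uniform_quantize(channel, levels=16):
    # Map an 8-bit channel onto a reduced set of intensity levels; the coarser
    # signal exploits the eye's limited discrimination of nearby intensities.
    step = 256.0 / levels
    return (np.floor(channel / step) * step + step / 2).astype(np.uint8)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in luminance frame
coarse = uniform_quantize(frame, levels=16)                    # 8 bits reduced to 4 bits per pixel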


The procedure itself reconnoiters the transpiring habitation of head-mounted displays (HMDs) attached to depth sensors as a utility of three-dimensional structural mapping, with respect to machine vision and its instrument-environment correspondences in the field of stereoscopic projection, where the extent of our interpretation of reality could perchance be transformed into the perspective of an autonomous and possibly deceivingly latent video feedback. This appliance, as a deduction, formalizes the triangulation of stereoscopic image displacement in flat, co-planar moving images.
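
The triangulation referred to above reduces, in the rectified co-planar case, to the classical relation Z = f * B / d: a point displaced by a disparity of d pixels between the two images lies at depth Z for focal length f (in pixels) and baseline B (in metres). A minimal sketch follows; the numbers are invented for illustration and do not describe the installation's cameras.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Rectified-stereo triangulation: Z = f * B / d.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Invented values: f = 700 px, B = 0.12 m, d = 35 px  ->  Z = 2.4 m
z = depth_from_disparity(35.0, 700.0, 0.12)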


The comparison between human spatial perception, regarding vision, and the systematics of machine- and computer-aided visual environments provides the fundamental basis of this project. It intentionally does not emanate from a scientific standpoint; nevertheless, I glean the necessity to amalgamate disciplines of the formal sciences, apropos of geospatial informatics, as a methodological tool. Strictly speaking, this auxiliary tool will be entirely subsumed within the retrieval span of the 3-D rearrangement of an epipolar geometry (points, polygons, vertices and so on) inside the current exhibition space, Vordere Zollamtstraße 7 - Project Room 201.
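
For readers unfamiliar with the term, epipolar geometry couples two views of the same scene through a fundamental matrix F such that every true correspondence (x, x') satisfies x'^T F x = 0. The sketch below estimates F with OpenCV's standard cv2.findFundamentalMat routine from synthetic correspondences; the simulated camera numbers are placeholders, and the snippet only illustrates the constraint, not the project's actual pipeline.

import numpy as np
import cv2

# Synthetic matches between two rectified views: each point is shifted
# horizontally by a disparity derived from an invented depth value.
rng = np.random.default_rng(0)
pts_left = rng.uniform([0.0, 0.0], [640.0, 480.0], size=(30, 2)).astype(np.float32)
depths = rng.uniform(1.0, 6.0, size=30)                   # metres, invented
disparities = (700.0 * 0.12 / depths).astype(np.float32)  # f = 700 px, B = 0.12 m
pts_right = pts_left.copy()
pts_right[:, 0] -= disparities

# Estimate the fundamental matrix; true matches satisfy x'^T F x = 0 up to noise.
F, inlier_mask = cv2.findFundamentalMat(pts_left, pts_right, cv2.FM_RANSAC, 1.0, 0.99)

def epipolar_residual(F, x, x_prime):
    # |x'^T F x| for one match in homogeneous pixel coordinates; near zero when correct.
    return float(abs(np.append(x_prime, 1.0) @ F @ np.append(x, 1.0)))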


“The human visual system constantly adapts to different luminance levels when viewing natural scenes. We present a model of the visual adaptation, which supports displaying the high dynamic range content on the low dynamic range displays. In this solution, an eye tracker captures the location of the observer’s gaze. Temporary adaptation luminance is then determined as the impact of the light area surrounding the gaze point. Finally, the high dynamic range video frame is tone mapped and displayed on the screen in real-time. We use a model of local adaptation, which predicts how the adaptation signal is integrated in the retina, including both time-course and spatial extent of the visual adaptation. The applied tone mapping technique uses a global compression curve, the shape of which is adapted to the local luminance value. This approach mimics a natural phenomenon of the visual adaptation occurring in human eyes.”
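
The passage quoted above describes a gaze-driven model of visual adaptation for tone mapping. The sketch below is not the quoted authors' algorithm; it is only a much-simplified stand-in, assuming a Gaussian-weighted adaptation luminance around the gaze point and a generic Naka-Rushton-style compression curve, to indicate the general shape of such an approach.

import numpy as np

def tone_map_gaze_adaptive(luminance, gaze_xy, sigma_px=80.0):
    # Estimate an adaptation luminance as a Gaussian-weighted average of the HDR
    # luminance around the gaze point, then compress every pixel with the global
    # curve L / (L + L_adapt), yielding display-referred values in [0, 1).
    h, w = luminance.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weights = np.exp(-((xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2) / (2.0 * sigma_px ** 2))
    l_adapt = np.sum(weights * luminance) / np.sum(weights)
    return luminance / (luminance + l_adapt)

hdr_frame = np.random.uniform(0.01, 10000.0, (480, 640))  # stand-in HDR luminance (cd/m^2)
ldr_frame = tone_map_gaze_adaptive(hdr_frame, gaze_xy=(320, 240))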


The complete text is available at https://phaidra.bibliothek.uni-ak.ac.at/detail_object/o:35108 // The text is updated frequently in draft form //