chady karlitch
Software & multimedia

    Phoenicia in AR

    Augmented portals on sidewalks

    The project is a distributed, interactive, digital public installation planned for Beirut.

    Physical "window-like" panels are placed at specific spots on the sidewalks of Beirut's downtown, letting pedestrians interactively visualize and explore Phoenician-era scenes using augmented reality.

    Demo video, shot in Sioufi Garden:
    Video: https://youtu.be/wVT4JxqRkKo
    The demo’s virtual scene is not representative of the final art created for the project.
    Experience A outline

    4 scenes are distributed around the area, each seen through its physical marker.

    Marker types, placement and orientation affect the augmented scene.

    The scenes are as follows:
    1. A Canaanite bedroom/house, seen from a window's vantage point.
    2. A village's water source (ain), seen from a crevice in the rock.
    3. A temple's pavilion with a few animated children, seen from the pavilion's ceiling.
    4. To be determined during development.

    Additional details

    Once an area has been explored, its scene is stored locally and can be 'searched' for hidden artifacts, some of which are coupon codes for nearby opt-in shops.

    'Theoretical' opt-in shops.

    The app makes it easy to capture and share photos of AR interactions and local explorations.

    Exploration of all 4 locations reveals additional artifacts.



    Social media

    Interesting subjects in this project:
    • GLSL programming (Window masking, scene blending, etc.).
    • Perceptual correctness and visual coherence of mixed digital and physical.
      Interesting articles on that:
      - William Steptoe, Presence and Discernibility in Conventional and Non-Photorealistic Immersive Augmented Reality (link)
      - David Surman, CGI Animation: Pseudorealism, Perception and Possible Worlds (link)
    • A public installation is planned and executed with the intention of being staged in the physical public domain, and being freely accessible to all.
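The window masking and scene blending mentioned above would live in GLSL on the GPU; as a language-neutral sketch of the same composite (array names and sizes invented for illustration), the idea in NumPy is:

```python
import numpy as np

def composite(camera, virtual, mask):
    """Blend a virtual scene over the camera feed through a window mask.

    camera, virtual: float arrays of shape (H, W, 3), values in [0, 1]
    mask: float array of shape (H, W), 1.0 inside the window, 0.0 outside
    """
    m = mask[..., None]  # broadcast the mask over the color channels
    return m * virtual + (1.0 - m) * camera

# Tiny 1x2 example: left pixel shows the virtual scene, right the camera.
camera = np.zeros((1, 2, 3))
virtual = np.ones((1, 2, 3))
mask = np.array([[1.0, 0.0]])
out = composite(camera, virtual, mask)
```

A fragment shader would evaluate the same expression per pixel, with the mask softened at its edges for a cleaner window boundary.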

    Untackled subjects that could be interesting:
    • Collaborative shaping/building/terraforming of a scene.

    To be completed.


    Sketches of The Obsessive Drafter

    Facial capture and generative sketches

    The Obsessive Drafter is a large autonomous robot arm that interpretively draws portraits of the people who approach it. It was initiated and led by Guillaume Credoz, who created the mechanical structure, and completed in collaboration with Nareg Karaoghlanian for mechatronics, including the mobility program, and myself for the sketch-generation process.
    Saint-Étienne Design Biennial 2017
    March 9 until April 9 2017, Building H - Cité du design: TOD was exhibited as part of "Si automatique?". During the month-long performance, it was to fill a 6-by-3-meter wall with sketched portraits of visitors…
    Panorama of 'Cité du design'
    Sketch generation process
    Using C# (Visual Studio 2013) and EMGU, an OpenCV wrapper.
    The software automatically captures the faces of visitors through a webcam attached at TOD's wrist, then converts the image into a short sketch to feed to the machine.

    1) Computer Vision
    I didn't train my own sets, but used the default OpenCV eye and face Haar cascade definitions that come with the installation. Basic usage of those definitions alone returned a lot of false positives.

    Video: https://youtu.be/D44EPcD5yng

    OpenCV Face Detection Visualized - Adam Harvey

    Some filters were created:
    • Min & max dimensions
    • Mobility range check
    • Growth and shrinkage limits
    • Capture time
    • Face tracking
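The dimension, mobility, and growth filters above can be sketched in a few lines. This is an illustrative Python version, not the actual C#/EMGU implementation; all thresholds and names are invented:

```python
def plausible(box, prev_box, frame_size, min_dim=40, max_dim=400,
              max_move=60, max_scale_step=1.3):
    """Reject detections that are too small/large, jump too far between
    frames, or grow/shrink faster than a real face plausibly can."""
    x, y, w, h = box
    fw, fh = frame_size
    # Min & max dimensions
    if not (min_dim <= w <= max_dim and min_dim <= h <= max_dim):
        return False
    if prev_box is not None:
        px, py, pw, ph = prev_box
        # Mobility range check: the center must not jump too far in one frame
        dx = (x + w / 2) - (px + pw / 2)
        dy = (y + h / 2) - (py + ph / 2)
        if (dx * dx + dy * dy) ** 0.5 > max_move:
            return False
        # Growth and shrinkage limits
        if not (1 / max_scale_step <= w / pw <= max_scale_step):
            return False
    # The detection must lie inside the frame
    return 0 <= x and 0 <= y and x + w <= fw and y + h <= fh

# A stable detection passes; a sudden jump across the frame does not.
ok = plausible((100, 100, 80, 80), (98, 102, 78, 78), (640, 480))
bad = plausible((500, 100, 80, 80), (98, 102, 78, 78), (640, 480))
```

Chaining cheap geometric checks like these removes most cascade false positives before any heavier tracking logic runs.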
    Here's a capture testing session, run against a 46-minute documentary:
    Video: https://youtu.be/dJwODqj47AY
    A face being tracked has a score that is updated each frame.

    This score is then used when polling for a portrait to sketch, along with a final time filter: as the performance lasts a month, we initially intended for some of the captured visitors to still be around when their portraits are sketched on the wall.
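The score-and-poll step might look like the following sketch (class and field names are assumptions for illustration, not the project's actual code):

```python
class TrackedFace:
    def __init__(self, face_id):
        self.face_id = face_id
        self.score = 0.0
        self.last_seen = 0.0

    def update(self, quality, now):
        """Called each frame the face is detected; quality in [0, 1]."""
        self.score += quality
        self.last_seen = now

def pick_portrait(faces, now, max_age=900.0):
    """Poll for the best portrait: the highest score among faces seen
    recently enough that the visitor may still be around (time filter)."""
    fresh = [f for f in faces if now - f.last_seen <= max_age]
    return max(fresh, key=lambda f: f.score, default=None)

a, b, c = TrackedFace("a"), TrackedFace("b"), TrackedFace("c")
for _ in range(30):
    a.update(0.9, now=1000.0)   # strong, recent track
for _ in range(50):
    b.update(0.8, now=100.0)    # higher score, but captured long ago
c.update(0.5, now=1000.0)       # weak track
best = pick_portrait([a, b, c], now=1200.0)  # b fails the time filter
```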


    A sketch

    Event captures
    2) Sketch generation

    The sketch routine was suggested to be a continuous circling line whose diameter varies with the brightness of the underlying area.

    The steering behavior of the pen tip was to be defined.

    A quick prototype was created in AS3:
    A wandering particle is attracted by the color proximity of neighboring areas; in turn, the particle influences the underlying area by progressively fading it out of existence.

    Some parameter settings produced a cool (if inefficient) contour tracer.
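A rough Python transliteration of the prototype's idea, a particle circling tighter over dark areas and fading the pixels it consumes (all parameter values invented):

```python
import math

def wander(image, steps=200, base_radius=3.0, fade=0.2):
    """image: 2D list of floats, 0 = black, 1 = white. Returns the path."""
    h, w = len(image), len(image[0])
    x, y, angle = w / 2.0, h / 2.0, 0.0
    path = []
    for _ in range(steps):
        px, py = int(x) % w, int(y) % h
        darkness = 1.0 - image[py][px]
        # Darker area -> sharper turn -> tighter circling
        angle += 0.3 + 0.7 * darkness
        step = base_radius * (0.3 + 0.7 * (1.0 - darkness))
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        # The particle "consumes" the area, fading it toward white
        image[py][px] = min(1.0, image[py][px] + fade)
        path.append((x, y))
    return path

# Striped test image: dark diagonals attract denser circling.
img = [[0.0 if (i + j) % 7 == 0 else 1.0 for j in range(32)] for i in range(32)]
path = wander(img)
```

Sweeping the turn and step coefficients is exactly the kind of parameter-space exploration that produced the gallery below.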

    Gallery of results from the initial prototype, using automated parameter-space exploration:
    • Orbital wanderer - new creatures
    • Orbital wanderer - parameter exploration
    Another approach
    A couple of different trials later, I decided to stop playing and define specific rules: normalize and filter the captured image, create a dithered version, order the dither points to generate a path, and detail the picture along it.

    Image filtering:
    1. Noise removal
    2. Masking against a pre-generated 'face focus' map
    3. Image histogram equalization, using CLAHE
    4. Minimizing large black patches
    Edges were also extracted, using the Hough transform, and were intended to add extra detailing around the center of the face (eyes, nose, mouth), but were ultimately not used.

    Dither map:
    1. Floyd–Steinberg dithering.
    2. The probability of keeping a dither point fades out with distance from a central circle.
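The error-diffusion step can be illustrated with a minimal plain-Python Floyd–Steinberg pass (the project used C#/EMGU; the central-circle fade is omitted here):

```python
def floyd_steinberg(gray):
    """gray: 2D list of floats in [0, 1]. Returns a 2D list of 0/1 values."""
    h, w = len(gray), len(gray[0])
    img = [row[:] for row in gray]          # work on a copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            # Diffuse the quantization error to unvisited neighbors
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out

# A flat 50%-gray patch dithers to a mix of black and white points.
d = floyd_steinberg([[0.5] * 8 for _ in range(8)])
ones = sum(sum(row) for row in d)
```

The surviving 1-pixels become the dither points that the ordering step turns into a drawable path.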

    Filters

    Ordering:
    The line can have as many breaks as needed; otherwise it's a traveling salesman problem!
    Trial 1, minimum spanning tree:
    A Delaunay triangulation was used to compute the Euclidean minimum spanning tree of the dither points:
    EMST
    Trial 2, (simpler is better) weighted sort:

    Nodes are weighted by distance (between vertices), hue, brightness and proximity to the center. This tends to build up the face in quasi-sequence, facial feature by facial feature.

    Each path node is encircled once, and the path can break when distances are above a threshold.
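A greedy sketch of that weighted ordering with the break threshold (weights and values are invented; the original also weighted hue and brightness):

```python
def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def order_points(points, center, w_dist=1.0, w_center=0.3, break_dist=5.0):
    """points: list of (x, y). Returns a list of path segments; a new
    segment starts whenever the next point is farther than break_dist."""
    remaining = points[:]
    current = min(remaining, key=lambda p: dist(p, center))
    remaining.remove(current)
    segments, segment = [], [current]
    while remaining:
        def cost(p):
            return w_dist * dist(p, segment[-1]) + w_center * dist(p, center)
        nxt = min(remaining, key=cost)
        remaining.remove(nxt)
        if dist(nxt, segment[-1]) > break_dist:   # pen-up: break the line
            segments.append(segment)
            segment = [nxt]
        else:
            segment.append(nxt)
    segments.append(segment)
    return segments

# Two clusters of dither points become two pen strokes.
pts = [(0, 0), (1, 0), (2, 1), (30, 30), (31, 30)]
segs = order_points(pts, center=(0, 0))
```

Encircling each visited node once along these segments reproduces the continuous-circling-line look of the final sketches.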

    Below is one of the final results, the unavoidable sketch of the lovely Lena Söderberg:

    Video: https://youtu.be/owXyy5guzAs
    Wall test

    Wall filling & scenario:
    A completed sketch is a normalized turtle graphic. It is transformed to fit the next available cell on the wall according to a running scenario, whose data are written to disk.
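The fit-to-cell transform can be sketched as follows (the cell grid, sizes, and margins are assumptions, not the project's actual scenario format):

```python
def to_wall(path, cell_col, cell_row, cell_size=0.5, margin=0.05):
    """path: list of (x, y) in [0, 1] (normalized turtle graphic).
    Returns wall coordinates in meters for a grid of square cells."""
    scale = cell_size - 2 * margin          # drawable span inside a cell
    ox = cell_col * cell_size + margin      # cell origin on the wall
    oy = cell_row * cell_size + margin
    return [(ox + x * scale, oy + y * scale) for x, y in path]

# Place a unit-square diagonal into the cell at column 3, row 1.
wall_path = to_wall([(0.0, 0.0), (1.0, 1.0)], cell_col=3, cell_row=1)
```

Keeping sketches normalized means the scenario only has to hand out cell indices; the machine-side commands stay resolution-independent.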

    The entire process runs on a Windows laptop, and the drawing commands are sent to the Arduino via serial communication.
    Wall
    Machine execution
    The wall at the end of the month:
    End of month wall
    Photo by Guillaume Credoz.

    Related

    Another project featuring TOD's avatar: https://chadiik-com-caa17.web.app/?article=PublicCanvas


    Little Beirut Bar

    Cocktails, voronoi diagrams and MDF

    Bar design

    Large movable furniture: Made of 4 locking pieces,

    Pieces

    2 of which are interchangeable without breaking the Voronoi diagram that makes up the top:

    Pieces

    The bartop's composition was made using a small AS3 script that generated the Voronoi diagram as I manually distributed cell points over a guiding layer:

    Voronoi

    The areas where the wing pieces meet the front piece had identical point distributions, so they could be interchanged with matching seams.
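The core of such a script is just nearest-seed assignment. A brute-force rasterized sketch (the original AS3 tool likely computed proper cell polygons; this version only shows the idea, and shared seed positions along a seam are what make two pieces match):

```python
def voronoi_raster(width, height, seeds):
    """Return a grid where each pixel holds the index of its nearest seed."""
    def nearest(x, y):
        return min(range(len(seeds)),
                   key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2)
    return [[nearest(x, y) for x in range(width)] for y in range(height)]

# Two seeds split a 10x4 strip into a left and a right cell.
grid = voronoi_raster(10, 4, [(2, 2), (8, 2)])
```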

    Voronoi

    The main design was executed by skilled craftsmen.

    The top is a 1 cm thick MDF board; the diagram was CNC-cut and engraved into it.

    Bartop

    Guide

    The bar has everything a professional bartender wishes for: a lit inner working area with electric outlets, speed trays and shelves, display shelves, an ice sink, and a 30 L clean-water reservoir. The bartop stands 110 cm high:

    Interior


    Studio shot

    Little Beirut is a professional cocktail-bar catering service. I co-founded it and designed its first bar: Website




    Cool down buddy

    Experiments & interactions in AR

    A digital puppet and a live user interact with some props:
    Video: https://youtu.be/M3S-zcX843U

    The 3D was created and animated in 3ds Max. The development was made in Unity3D and uses the Vuforia AR engine.

    I wanted to use casual items as markers: since credit cards were too glossy, I used 3 playing cards. A wobbly outline vertex shader was applied to the assets to tone down the effects of the small registration errors that are inherent in all current AR engines.

    Outline shaders

    The interactions are based on distance and a few other data points; nothing complex compared with, say, a dance routine between 2 characters, which was my initial test idea.
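Distance-driven interaction logic of this kind reduces to a small state selector. A toy Python version (state names and thresholds are invented; the real project drove Unity animation states):

```python
def puppet_state(dist_to_user, dist_to_prop, near=0.15, far=0.6):
    """Pick the puppet's behavior from marker distances, in meters."""
    if dist_to_prop < near:
        return "play_with_prop"   # a prop card is within reach
    if dist_to_user < near:
        return "greet"            # the user's card is close
    if dist_to_user > far:
        return "idle"             # nothing interesting nearby
    return "watch_user"

s1 = puppet_state(0.1, 1.0)   # user is close
s2 = puppet_state(0.3, 0.05)  # prop is even closer
s3 = puppet_state(1.0, 1.0)   # nothing nearby
```

Each returned state would map to a branch of the animation-state tree shown below the video.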

    Animation states tree



    Public Canvas

    Shared drawing board

    This is an online multi-user experiment. Anyone who accessed it got a small patch on a shared canvas, to sketch a drawing that was then submitted for a digital avatar to execute.

    Public canvas, January-June 2017
    Video: https://youtu.be/KIs2tOtuImQ
    The red spheres account for the number of sketches submitted.
    Internally called "Little TOD"
    This was a quick warm-up exercise preceding my contribution to The Obsessive Drafter project. It was created in Unity3D and uses Firebase. The 3D model used is the model of the actual machine designed by Ghouyoum.


    Hidden lithophane

    Pop 3D Printing, unsintered PA12

    Original piece: Banksy – Tropicana, Weston-super-Mare, England

    This seemed like a suitable target for a derivative work with a hidden lithophane inserted in it:
    Video: https://youtu.be/wd2aps189Ek

    3D modeling was done in 3ds Max and Mudbox.
    Boy modeling

    The showering lady's geometry was created by a small utility app I had written:
    HL Javascript

    Make your own HL here:
    Recommended 3D printing service: Rapidmanufactory - (Do not repair 3D model)

    It displaces a suitable mesh according to the brightness of the image, encloses it in a cube, and generates a printable *.stl file. Unsintered powder layers inside the geometry create the lithophane effect (tested in SLS 3D printing only).
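The displacement step amounts to mapping brightness to wall thickness, so more unsintered powder sits behind dark pixels. A hedged sketch (depth values are invented, not the app's actual parameters):

```python
def displace(brightness, max_depth=3.0, min_wall=0.8):
    """brightness: 2D list in [0, 1] (1 = white). Returns per-pixel wall
    thickness in mm: bright pixels get thin walls, dark pixels thick ones."""
    return [[min_wall + (1.0 - b) * max_depth for b in row]
            for row in brightness]

thick = displace([[0.0, 1.0]])  # a black pixel next to a white pixel
```

Each thickness value then offsets the corresponding mesh vertex before the cube is closed around it.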
    The 3D print was completed by Rapidmanufactory in a slightly translucent nylon.
    Original piece

    And here is a meta HL, just because:
    Original piece


    Connectors 101

    Procedural 3D printable connectors in MAXScript

    Once, during my time at Rapidmanufactory, I was asked to write a script that would create 3D-printable connectors at the vertices of a mesh:
    Video: https://youtu.be/kweDVEtJsSg

    It was used in the process of creating a couple of large cylindrical lamps for an exhibition, the Omnilux lamps by Ghouyoum, who also defined the original connector model to imitate.

    Closeup of a brass connector (generation > 3D print > silicone mold > wax cast > investment cast), with an Omnilux lamp in the background.
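The core geometric step of such a script (the actual tool was MAXScript; this Python version is an illustration) is finding, for each vertex, the unit directions of its incident edges; a connector is then a hub with one socket per direction:

```python
def connector_directions(vertices, edges):
    """vertices: list of (x, y, z); edges: list of (i, j) index pairs.
    Returns {vertex index: list of unit vectors toward its neighbors}."""
    dirs = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        # Each edge contributes a direction to both of its endpoints
        for a, b in ((i, j), (j, i)):
            ax, ay, az = vertices[a]
            bx, by, bz = vertices[b]
            dx, dy, dz = bx - ax, by - ay, bz - az
            n = (dx * dx + dy * dy + dz * dz) ** 0.5
            dirs[a].append((dx / n, dy / n, dz / n))
    return dirs

# A corner vertex of a right angle gets two perpendicular socket directions.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
d = connector_directions(verts, [(0, 1), (0, 2)])
```

Orienting a copy of the reference connector socket along each direction, then unioning them per vertex, yields the printable pieces.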

    Other 3D models:
    Other uses
    Video: https://youtu.be/2XRRW7V0nQM

    Papier

    Characters and papercraft

    Papier is part of a workshop on digitally editing and creating a 3D character, unfolding it for printing on thick paper, which is then cut and glued to produce a 3D cardboard copy.


    The workshop was conceived by my brother, who also designed the 3D art.

    The 3D assets were conveniently named and provided via FBX. They are vertex-colored: easier during creation and more efficient for storage. The tessellation of the surface defines the user's in-app drawing resolution.

    The 3D assets were imported into 3ds Max and, through MAXScript, texture-unwrapped and exported into a unified 3D mesh, color and category/type file that can be easily parsed in Unity using C#, creating efficient Unity assets automatically loaded with their correct mapping structure and the data members used by the browsing system.

    Browsing

    The assets preparation and integration was mostly streamlined, and now new content can be easily added or modified.

    The shading uses an edited Unity flat shader, and the geometry stays vertex-colored until 'export', when a texture map and a Wavefront OBJ file are created and saved.

    The generated files are loaded into Pepakura, which unfolds the geometry beautifully, marks folding lines and gluing tabs, and distributes the colored parts on a flat surface ready for printing…

    All assets

    1 - Editor

    Video: https://youtu.be/o_CHJ1XVkk4

    2 - Assembly

    Video: https://youtu.be/TDbPrIE98XY


    Once Upon a Roof

    Urban fighters in Stage3D

    A game prototype created in 2013 with Fadi Karlitch, who also did the art and animation. It features 2 sword fighters engaging in 2 urban environments.

    First roof
    Built in vanilla AS3 and AGAL:
    • Automatic sword info extraction from sprites
    • Scene editor (All geometries are quad sprites in 3D space - Billboards)
    • Animation controllers
    • Graphics vertex and fragment shaders (sprites, animated spritesheets, post-processing)
    • Automated batch rendering
    • Simple AI
    • Gestures controls
    • Accelerometer enabled game camera (on mobile)
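The "automatic sword info extraction" item can be illustrated with a hedged guess at its core: locate marker-colored pixels in a frame sprite and derive a hilt-to-tip segment usable for hit detection (the actual AS3 implementation is not shown on this page):

```python
def sword_segment(sprite, key):
    """sprite: 2D list of color values; key: the sword marker color.
    Returns the pair of key-colored pixels that are farthest apart."""
    pts = [(x, y) for y, row in enumerate(sprite)
           for x, v in enumerate(row) if v == key]
    best, pair = -1.0, None
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2
            if d > best:
                best, pair = d, (pts[i], pts[j])
    return pair

# A diagonal line of 'S' pixels yields its two endpoints.
sprite = [list("S...."),
          list(".S..."),
          list("..S.."),
          list("...S.")]
seg = sword_segment(sprite, "S")
```

Running this once per animation frame gives a per-frame blade segment without hand-authoring collision data.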
    3D models created by Fadi.
    Sword extraction
    Scene editor

    DEMO - AI vs AI, 2 fights:
    Video: https://youtu.be/H63eR2fGI5M

    Video Demo Reel

    Video/Interactive/Showcase

    An Everything Demo Reel

    Video: https://youtu.be/y_C57JSi2zI

    Check other video showcases on my YT channel.