At the Edge of Vision – The Story of WebOccult + Deeper-i at Japan IT Week 2025

Ruchir Kakkad

CEO & Co-founder



Every collaboration creates a story. Some are born from timing, others from shared ambition.

But the partnership between WebOccult and Deeper-i began from something subtler, a mutual belief that intelligence should live closer to the world it serves.

For years, Vision AI has been mastering the art of seeing, while Edge AI has been perfecting the act of doing. When these two disciplines finally converge, something remarkable happens: insight finds immediacy. 

At Japan IT Week 2025, that convergence came to life in a small, living demonstration, a model of a parking lot no bigger than a tabletop, where cars were not only seen but understood in real time.

At first glance, the booth display seemed simple. Toy cars arranged in a miniature parking grid, a compact camera hovering overhead, a screen alive with flickering boxes of color, green for available, red for occupied. Yet beneath this understated setup existed a complete, self-contained ecosystem of intelligence.

The demonstration represented the full cycle of Vision AI meeting Edge AI: from frame capture to inference, from decision to visualization, all without a single trip to the cloud.

It was intelligence happening exactly where the world moved.

The Collaboration

The partnership between WebOccult and Deeper-i is defined by complementarity. WebOccult, with its Gotilo Suite, brings deep expertise in image understanding, the ability to detect, classify, and interpret patterns that form meaning. Deeper-i, through its Tachy Edge AI architecture, contributes the engineering precision that makes those insights possible in real time.

The two systems together represent more than compatibility; they represent philosophy in motion, a shared conviction that clarity should not wait for connectivity, and intelligence should not depend on distance.

In this collaboration, WebOccult’s software learns from vision, while Deeper-i’s hardware learns from motion. The result is a form of intelligence that doesn’t just analyze but responds, instantly, locally, and reliably.

The Demonstration at Japan IT Week 2025

At the co-exhibition booth in Makuhari Messe, the teams showcased a real-time car parking occupancy detection system, designed entirely for edge execution.

The setup integrated several components, each working in balance. At the core was the Tachy BS402 Neural Processing Unit (NPU), Deeper-i’s accelerator dedicated to running deep learning models at the edge. Mounted atop a Tachy Shield (HAT) on a Raspberry Pi 4, it enabled high-speed SPI communication between host and accelerator. The Pi, powered by a Pi 5 adapter and connected to an external monitor via a Micro HDMI-to-HDMI cable for real-time output, captured live frames from a USB camera. The camera, fixed on a stable mount overlooking the parking grid, delivered a continuous feed to the Pi, each frame representing a tiny slice of reality to be read, processed, and visualized.

Visitors could see the system work before their eyes. Cars moved within the model, and the camera, through Gotilo Inspect’s AI pipeline, immediately detected the change. Bounding boxes appeared, confidence scores updated, occupancy states shifted in real time. No buffering, no delay, no network dependency. 

The intelligence lived right there, on the desk, as immediate as a human glance.

The Technical Architecture

The demonstration was more than visual magic; it was a complete Edge AI pipeline compressed into a single table. It showcased how software and hardware communicate when designed with harmony rather than hierarchy.

The backend handled inference and data processing, while the frontend provided visualization, together forming a closed loop of awareness. Each frame captured by the USB camera was first received by the Raspberry Pi, pre-processed, and sent to the Tachy NPU through SPI-based data transfer.
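For readers curious about what that stage looks like in code, here is a minimal Python sketch of the capture-and-preprocess step, assuming OpenCV for the USB camera and the demo’s 480×480 input resolution; the send_to_npu function is a hypothetical placeholder for the SPI transfer, since the actual Tachy-RT driver call is not documented in this article.

```python
import cv2
import numpy as np

INPUT_SIZE = 480  # model input resolution used in the demo


def preprocess(frame):
    """Resize a BGR camera frame to the model input and scale pixels to [0, 1]."""
    resized = cv2.resize(frame, (INPUT_SIZE, INPUT_SIZE))
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)
    return (rgb.astype(np.float32) / 255.0).transpose(2, 0, 1)  # HWC -> CHW


def send_to_npu(tensor):
    """Hypothetical placeholder for the SPI transfer handled by the Tachy HAT driver."""
    raise NotImplementedError("replace with the actual Tachy-RT / SPI call")


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # USB camera attached to the Raspberry Pi
    ok, frame = cap.read()
    if ok:
        tensor = preprocess(frame)
        print("preprocessed frame shape:", tensor.shape)  # (3, 480, 480)
    cap.release()
```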

The NPU performed the neural inference, executing a custom YOLOv9-based detection model that had been optimized and compiled into Tachy format for compatibility with the Tachy-RT runtime. Once inference was complete, the processed tensors were transmitted back to the Pi, where Python-based post-processing handled bounding box decoding, class labeling, and confidence scoring.
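The exact output layout of the Tachy-compiled YOLOv9 model is not described in this article, so the sketch below assumes the raw predictions have already been decoded into rows of [x1, y1, x2, y2, score, class_id]; the confidence threshold, IoU threshold, and label map are illustrative values rather than the production settings.

```python
import numpy as np

CONF_THRESHOLD = 0.5          # illustrative confidence cut-off
IOU_THRESHOLD = 0.45          # illustrative NMS overlap cut-off
CLASS_NAMES = {0: "car"}      # illustrative label map


def iou(box, boxes):
    """IoU between one box and an array of boxes, all as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)


def postprocess(predictions):
    """Filter an (N, 6) array of [x1, y1, x2, y2, score, class_id] rows by
    confidence, apply non-maximum suppression, and attach readable labels."""
    preds = predictions[predictions[:, 4] >= CONF_THRESHOLD]
    preds = preds[preds[:, 4].argsort()[::-1]]  # highest confidence first
    keep = []
    while len(preds):
        best, preds = preds[0], preds[1:]
        keep.append(best)
        if len(preds):
            preds = preds[iou(best[:4], preds[:, :4]) < IOU_THRESHOLD]
    return [
        {"box": b[:4].tolist(), "score": float(b[4]),
         "label": CLASS_NAMES.get(int(b[5]), str(int(b[5])))}
        for b in keep
    ]
```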

This intermediate layer, subtle but essential, converted raw predictions into recognizable insight. The post-processed data was then passed through a socket-based communication layer to a Flask-built frontend dashboard, which visualized the results dynamically in a browser.
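One plausible shape for that hand-off is sketched below: the inference process pushes newline-delimited JSON over a local TCP socket, and a small Flask app exposes the latest results for the browser dashboard to poll. The port numbers, route name, and message format here are assumptions for illustration, not the actual Gotilo Inspect interface.

```python
import json
import socket
import threading

from flask import Flask, jsonify

app = Flask(__name__)
latest = {"detections": []}   # most recent frame's results, shared with the listener


def listen(host="127.0.0.1", port=5050):
    """Accept newline-delimited JSON detection messages from the inference process."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn, conn.makefile() as stream:
            for line in stream:
                latest["detections"] = json.loads(line)


@app.route("/occupancy")
def occupancy():
    # The browser dashboard polls this endpoint and redraws the slot rectangles.
    return jsonify(latest)


if __name__ == "__main__":
    threading.Thread(target=listen, daemon=True).start()
    app.run(host="0.0.0.0", port=8000)
```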

Each parking slot appeared as a color-coded rectangle, a direct reflection of the system’s perception of the miniature world beneath the lens.
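One plausible way to derive those occupied/available states is to check how much of each predefined slot rectangle is covered by a detected car, as in the sketch below; the slot coordinates and overlap threshold are illustrative assumptions, since the exact mapping rule is not spelled out in this article.

```python
PARKING_SLOTS = {                  # slot id -> [x1, y1, x2, y2] in the 480x480 frame
    "A1": [20, 20, 130, 200],      # illustrative coordinates only
    "A2": [150, 20, 260, 200],
    "A3": [280, 20, 390, 200],
}
OVERLAP_THRESHOLD = 0.4            # fraction of a slot a car box must cover


def overlap_ratio(slot, car):
    """Fraction of the slot rectangle covered by a detected car box."""
    x1, y1 = max(slot[0], car[0]), max(slot[1], car[1])
    x2, y2 = min(slot[2], car[2]), min(slot[3], car[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    slot_area = (slot[2] - slot[0]) * (slot[3] - slot[1])
    return inter / slot_area


def slot_states(detections):
    """Return {"A1": "occupied" | "available", ...} for the dashboard to color."""
    states = {}
    for slot_id, slot in PARKING_SLOTS.items():
        covered = any(
            overlap_ratio(slot, d["box"]) >= OVERLAP_THRESHOLD
            for d in detections if d["label"] == "car"
        )
        states[slot_id] = "occupied" if covered else "available"
    return states
```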

The entire data flow could be summarized as:
Camera → Raspberry Pi → Tachy HAT (via SPI) → Raspberry Pi → Socket Transfer → Flask Frontend UI.

The input resolution of 480×480 pixels balanced visual clarity with computational efficiency, allowing the model to perform consistently on embedded hardware. Every element of the setup, from lighting to frame rate, was calibrated for reliability rather than spectacle.

What visitors experienced was not a simulation but a functional prototype, a distilled version of what an industrial Vision AI system looks like when it runs on its own.

 

Why Edge Intelligence Matters

The choice to perform inference at the edge rather than in the cloud is not merely technical, it’s philosophical. Traditional vision systems rely on distance: cameras send data to remote servers, where it is processed and returned with results. That model introduces delay, bandwidth dependency, and often, privacy concerns. In contrast, Edge AI collapses that distance. Computation occurs directly at the source, on devices capable of learning, deciding, and acting in real time.

In manufacturing, logistics, or mobility, this proximity changes everything. A second saved in processing is a fault prevented, a decision optimized, an error avoided. The collaboration between WebOccult and Deeper-i demonstrates this shift in real form: Gotilo’s interpretive algorithms providing meaning, Deeper-i’s Tachy architecture delivering immediacy. The intelligence does not travel; it stays. It learns the rhythm of its own environment.

This isn’t just efficiency, it’s empathy, designed in silicon. Systems that see and decide where the action occurs begin to feel less like tools and more like participants in the process they monitor.

Designing Systems People Can Trust

All advanced systems, no matter how elegant, are ultimately measured by trust. At the exhibition, visitors didn’t ask how many frames per second the system achieved. They asked if it could be trusted to decide correctly.

That question defines the future of AI adoption more than any technical metric.

By keeping the entire inference loop visible and local, the WebOccult + Deeper-i demonstration offered transparency as much as precision.

Visitors could trace the logic in real time, from camera capture to bounding box display, understanding not just what the system decided, but how. This transparency builds reliability, and reliability becomes trust.

In many ways, Edge AI is not only an architectural improvement, it’s an ethical one. It decentralizes not just data, but responsibility. When systems explain themselves, people believe in them. That’s how technology becomes part of the human workflow instead of sitting above it.

The Broader Impact and Future Direction

The success of this demonstration is not confined to parking occupancy. It represents a blueprint for how Vision AI and Edge AI can collaborate across industries. The same architecture can inspect surfaces in manufacturing, verify shipments in logistics yards, monitor dwell times in ports, or analyze movement in smart cities, all without dependency on cloud infrastructure.

In this future, cameras don’t just see, they understand. Machines don’t just compute, they interpret. Every industrial space, from production floors to distribution hubs, can become an ecosystem of self-reliant intelligence.

The WebOccult + Deeper-i partnership is already extending this concept into new verticals. In manufacturing, the combination of Gotilo Inspect with Tachy Edge AI will support label inspection, defect detection, and process visibility.

In logistics, the same architecture can optimize resource allocation through real-time analytics. Across these domains, the objective remains the same: to make intelligence not louder, but closer; not faster, but truer.

Building Precision That Stays

The demonstration at Japan IT Week 2025 wasn’t simply a collaboration between two companies. It was a rehearsal for a future where intelligence performs where life happens. From the small tabletop model to the embedded pipeline running beneath it, everything reflected one guiding principle: proximity creates clarity.

For WebOccult, this is the natural evolution of Gotilo’s Vision AI. For Deeper-i, it is the next chapter in Edge computing’s maturation. For industries around the world, it is a glimpse of how technology can become truly dependable, not by existing everywhere, but by existing exactly where it’s needed.

At the edge, intelligence doesn’t wait. It acts. And in that instant, vision becomes understanding.

Want a closer look at how the parking demonstration was engineered? Read our detailed breakdown in The Technical Anatomy of a Parking Twin.

To explore how Vision AI and Edge AI can transform your industry’s visibility and precision, visit www.weboccult.com or connect with our team to experience the future of inspection!

Ruchir Kakkad
CEO, WebOccult

Tech enthusiast | Co-founder @WebOccult | First coder, strategist, and dreamer of the team | Driven by AI, focused on change | Loving every bit of this journey
