<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:yandex="http://news.yandex.ru" xmlns:turbo="http://turbo.yandex.ru" xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <title>Careers</title>
    <link>https://www.aliensense.com</link>
    <description>Join the team of like-minded nerds defining the interface between AI and the physical world.</description>
    <language>en</language>
    <lastBuildDate>Tue, 10 Mar 2026 21:44:11 +0300</lastBuildDate>
    <item turbo="true">
      <title>Embedded Software Engineer, AI Accelerators</title>
      <link>https://www.aliensense.com/jobs-at-aliensense/lesielad41-embedded-software-engineer-ai-accelerato</link>
      <amplink>https://www.aliensense.com/jobs-at-aliensense/lesielad41-embedded-software-engineer-ai-accelerato?amp=true</amplink>
      <pubDate>Wed, 04 Mar 2026 00:24:00 +0300</pubDate>
      <author>Abu Dhabi, UAE</author>
      <category>AI</category>
      <category>Embedded</category>
      <category>🇦🇪</category>
      <description>Join Aliensense to bring edge AI to life on MLSoC and NVIDIA Jetson.</description>
      <turbo:content><![CDATA[<header><h1>Embedded Software Engineer, AI Accelerators</h1></header><blockquote class="t-redactor__preface">Aliensense (Masdar City, UAE)</blockquote><div class="t-redactor__text">Aliensense is the physical AI company. We build a modular compute and sensor platform that gives robots the perception, reasoning, and real-time control they need to operate in the physical world. Our hardware combines NVIDIA compute with custom GMSL camera modules, CAN-FD buses, and a dedicated AI accelerator tier. We are based in Masdar City and backed by deep-tech investors across the GCC and Europe.</div><h3  class="t-redactor__h3">The Role</h3><div class="t-redactor__text">We are hiring an Embedded Software Engineer to own the AI inference layer on our modular robotics platform. You will integrate dedicated AI accelerators (and, optionally, Jetson's DLA/GPU) into our perception stack, taking trained models from ML engineers and making them run fast, deterministically, and with bounded latency on our embedded platform.<br /><br />This is a hands-on, end-to-end role: from flashing firmware and writing low-level runtime code to profiling inference on real hardware and handing off tested pipelines to robotics engineers.</div><h3  class="t-redactor__h3">What You Will Do</h3><div class="t-redactor__text">- Own the model deployment pipeline: ONNX / TFLite / PyTorch export → Accelerator SDK / Palette compiler → runtime integration on the robotic platform<br />- Write and maintain C/C++ inference wrappers and camera-to-accelerator data paths with deterministic latency budgets<br />- Profile and optimise models for the MLSoC AI accelerator: layer fusion, quantisation (INT8/FP16), memory layout, bandwidth bottlenecks<br />- Integrate accelerator outputs (detection, depth, segmentation, SLAM features) into our ROS 2 perception stack<br />- Collaborate with the firmware team to ensure camera triggers, GMSL frame timestamps, and inference timestamps are tightly correlated<br />- Maintain the inference runtime as a versioned, reproducible component of the platform's software stack</div><h3  class="t-redactor__h3">Requirements</h3><div class="t-redactor__text">- 3+ years of embedded or edge ML engineering experience<br />- Hands-on experience deploying models to a dedicated AI accelerator (SiMa.ai, Hailo, Coral, Kneron, Horizon, or similar MLSoC — not just GPU/CPU)<br />- Strong C/C++ (modern C++17/20); Python for tooling and model export<br />- Understanding of quantisation-aware training, post-training quantisation, and the performance trade-offs<br />- Familiarity with ONNX, TFLite, or equivalent model interchange formats<br />- Experience reading and interpreting profiler output (bandwidth, compute, memory) on constrained hardware</div><h3  class="t-redactor__h3">Nice to Have</h3><div class="t-redactor__text">- Familiarity with NVIDIA Jetson (DLA, CUDA, TensorRT, DeepStream)<br />- GStreamer / V4L2 camera pipeline experience<br />- ROS 2 node development<br />- Background in SLAM, visual odometry, or 3D perception</div><h3  class="t-redactor__h3">What We Offer</h3><div class="t-redactor__text">- Hands-on work with cutting-edge edge AI silicon<br />- End-to-end ownership — from silicon bring-up to live robotics demos<br />- Small, senior team with no bureaucracy<br />- Competitive compensation<br />- Masdar City HQ — the UAE's deep-tech hub</div><h2  class="t-redactor__h2">Apply</h2><div class="t-redactor__text"><a href="mailto:careers@aliensense.com">careers@aliensense.com</a> · Subject: `AI Accelerators Engineer`</div>]]></turbo:content>
    </item>
    <item turbo="true">
      <title>Embedded Software Engineer, Computer Vision &amp; Perception</title>
      <link>https://www.aliensense.com/jobs-at-aliensense/hsesy5lsb1-embedded-software-engineer-computer-visi</link>
      <amplink>https://www.aliensense.com/jobs-at-aliensense/hsesy5lsb1-embedded-software-engineer-computer-visi?amp=true</amplink>
      <pubDate>Mon, 09 Mar 2026 23:05:00 +0300</pubDate>
      <author>Abu Dhabi, UAE</author>
      <category>CV</category>
      <category>Embedded</category>
      <category>🇦🇪</category>
      <description>Build real-time multi-camera perception for autonomous robots.</description>
      <turbo:content><![CDATA[<header><h1>Embedded Software Engineer, Computer Vision &amp; Perception</h1></header><blockquote class="t-redactor__preface">Aliensense (Masdar City, UAE)</blockquote><div class="t-redactor__text">Aliensense is the physical AI company. We build a modular compute and sensor platform that gives robots the perception, reasoning, and real-time control they need to operate in the physical world. Our hardware combines NVIDIA compute with custom GMSL camera modules, CAN-FD buses, and a dedicated AI accelerator tier. We are based in Masdar City and backed by deep-tech investors across the GCC and Europe.</div><h3  class="t-redactor__h3">The Role</h3><div class="t-redactor__text">We are hiring an Embedded Software Engineer with a computer vision focus to own the camera and perception pipeline on our platform: from raw GMSL frames off the MAX96792A deserialiser all the way to calibrated, synchronised stereo streams powering Isaac ROS Visual SLAM.<br /><br />You will work across the full stack: camera bring-up, ISP tuning, multi-camera synchronisation, stereo rectification, and integration with our ROS 2 perception nodes. You will ship code that runs in real time on real robots.</div><h3  class="t-redactor__h3">What You Will Do</h3><div class="t-redactor__text">- Bring up and maintain GMSL2/3 camera pipelines on Jetson Orin NX/Nano (MAX96792A deserialiser, FRAMOS FSM sensors, MIPI CSI-2)<br />- Develop and tune GStreamer / Argus / V4L2 pipelines for multi-camera capture with hardware-synchronised triggers<br />- Implement and maintain stereo camera calibration (intrinsics, distortion, stereo extrinsics) and live rectification<br />- Integrate camera streams with IMU data for timestamp-accurate sensor-fusion inputs<br />- Deploy and tune Isaac ROS Visual SLAM on stereo + IMU inputs; optimise for stable continuous operation<br />- Profile and resolve latency, jitter, and throughput issues across the camera → deserialiser → CSI → ISP → ROS pipeline<br />- Contribute to the sensor configuration system</div><h3  class="t-redactor__h3">Requirements</h3><div class="t-redactor__text">- 3+ years of computer vision or embedded vision engineering<br />- Strong C++; comfortable with Python for calibration tooling and scripting<br />- Experience with GStreamer, V4L2, or NVIDIA Argus/libargus for camera pipelines<br />- Working knowledge of camera calibration (OpenCV, ROS `camera_calibration`, or equivalent)<br />- Stereo vision fundamentals: epipolar geometry, rectification, disparity<br />- Experience with ROS 2 (publishers, synchronisation, TF, launch files)</div><h3  class="t-redactor__h3">Nice to Have</h3><div class="t-redactor__text">- GMSL / FPD-Link / MIPI CSI-2 camera bring-up experience<br />- NVIDIA Jetson Orin platform knowledge (JetPack, BSP, jtop)<br />- Isaac ROS or Jetson-specific perception accelerators (VPI, cuVSLAM)<br />- IMU driver integration and IMU-camera time synchronisation<br />- Familiarity with SLAM or visual odometry systems (ORB-SLAM, VINS, OpenVINS, cuVSLAM)<br />- Hardware-level camera synchronisation (PWM trigger, PPS, GMSL GPIO)</div><h3  class="t-redactor__h3">What We Offer</h3><div class="t-redactor__text">- Own the perception stack of a real physical-AI product from day one<br />- Work with GMSL systems, Jetson Orin, and Isaac ROS on actual robots<br />- Close collaboration with HW, firmware, and AI teams<br />- Competitive compensation<br />- Masdar City HQ, UAE</div><h2  class="t-redactor__h2">Apply</h2><div class="t-redactor__text"><a href="mailto:careers@aliensense.com">careers@aliensense.com</a> · Subject: `CV Engineer`</div>]]></turbo:content>
    </item>
  </channel>
</rss>
