AI

Please learn some basic Python and Ubuntu Linux first. Then go through the earlier videos to learn how to use a Raspberry Pi Pico W or an ESP32 with the different components, and how to build a mecanum-wheel car, before working through these videos.
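Once the car is built, it helps to understand the maths that turns a drive command into four wheel speeds. Below is a short Python sketch of the standard mecanum inverse kinematics; the wheel radius and chassis dimensions are made-up placeholders, so measure your own car before using it.

```python
def mecanum_wheel_speeds(vx, vy, wz, wheel_radius=0.03,
                         half_length=0.08, half_width=0.07):
    """Inverse kinematics for a 4-wheel mecanum car.

    vx: forward speed (m/s), vy: sideways speed (m/s, left positive),
    wz: rotation rate (rad/s). Dimensions here are placeholder values.
    Returns wheel angular speeds (rad/s) in the order
    (front_left, front_right, rear_left, rear_right).
    """
    k = half_length + half_width
    fl = (vx - vy - k * wz) / wheel_radius
    fr = (vx + vy + k * wz) / wheel_radius
    rl = (vx + vy - k * wz) / wheel_radius
    rr = (vx - vy + k * wz) / wheel_radius
    return fl, fr, rl, rr

# Driving straight forward spins all four wheels at the same speed;
# strafing sideways spins diagonal pairs in opposite directions.
forward = mecanum_wheel_speeds(0.3, 0.0, 0.0)
strafe = mecanum_wheel_speeds(0.0, 0.3, 0.0)
```

The same function can run unchanged on a Pico W or ESP32 under MicroPython, which is one reason plain Python is worth learning first.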

We are in the process of integrating an Nvidia Jetson Orin Nano running ROS2 with a Raspberry Pi Pico W and an ESP32 running micro-ROS, to create toy self-driving cars to help pupils learn AI and coding on.

We are also working on a Large Language Model that specialises in hardware and software development that could be used to assist pupils while they learn to code.

There are also many free courses on Computer Science, Programming, AI and Machine Learning online that you could go through.

You could also go through Hugging Face and look at the source code for LeRobot:

https://huggingface.co/lerobot

https://github.com/huggingface/lerobot

Hugging Face hosts a lot of open-source large language models that you could also try out via LM Studio. Just download LM Studio, try out the different open-source LLMs, and use their APIs locally too. You could also install n8n in a Docker container, or set up a local homelab using Rancher to manage the Docker containers and Kubernetes clusters.
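Once LM Studio's local server is enabled, it exposes an OpenAI-compatible API on localhost (port 1234 by default). A minimal Python sketch, assuming the default port and that a model is already loaded in LM Studio:

```python
import json
import urllib.request

# LM Studio's default local server address (configurable in the app).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Build an OpenAI-style chat payload that LM Studio's server accepts.
    "local-model" is a placeholder name; LM Studio serves whichever
    model you have loaded."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_local_llm(prompt):
    """POST the payload to the locally running server and return the reply."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LMSTUDIO_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires LM Studio running with its local server enabled):
#   print(ask_local_llm("Explain mecanum wheels in one sentence."))
```

Because the API shape matches OpenAI's, code written against the local server can later be pointed at other OpenAI-compatible backends with only a URL change.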

We are also looking at showing pupils how to fine-tune Audio-Vision Language Models for robots to use in the future. To fine-tune machine learning models you need to build the right infrastructure from the start, so the sooner pupils learn Linux, Docker, Kubernetes clustering, RAG and possibly Kafka the better.

We are also looking at TensorRT and TensorRT-LLM:

https://docs.nvidia.com/nemo-framework/user-guide/24.09/deployment/llm/optimized/tensorrt_llm.html

We are also looking at simulation tools for robots to learn on, like Gazebo and Nvidia Isaac Gym.

The Nvidia Jetson Orin Nano is powerful enough to run TensorRT and local Large Language Models to control robots.
There are many examples of robots that use local large language models running on an Nvidia Jetson Orin Nano to control the car. You could look at how these were built.
You could also install LM Studio and try out different local large language models on your PC, like DeepSeek running locally. For robot simulation with ROS2 you could use Gazebo or Nvidia Isaac Sim.
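Before setting up Gazebo or Isaac Sim, the core idea of a robot simulator can be shown in plain Python: each time step, rotate the car's body-frame velocities into the world frame and integrate the pose. This is only a toy illustration of the concept, not any Gazebo or Isaac API:

```python
import math

def step_pose(x, y, theta, vx, vy, wz, dt=0.05):
    """Integrate the car's pose for one time step.

    (x, y, theta) is the world-frame pose; (vx, vy, wz) are body-frame
    velocities (vx forward, vy left, wz counter-clockwise rotation).
    """
    x += (vx * math.cos(theta) - vy * math.sin(theta)) * dt
    y += (vx * math.sin(theta) + vy * math.cos(theta)) * dt
    theta += wz * dt
    return x, y, theta

# Drive forward at 0.2 m/s for 1 second (20 steps of 50 ms):
pose = (0.0, 0.0, 0.0)
for _ in range(20):
    pose = step_pose(*pose, vx=0.2, vy=0.0, wz=0.0)
# The car ends up roughly 0.2 m ahead of where it started.
```

Real simulators add physics, sensors and 3D rendering on top, but the pose-integration loop above is the same idea they run internally.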
We have been testing HiWonder TurboPi Cars. The hardware is good but they need to improve the instructions to make it easier for people to follow.
There are lots of different designs out there that you could use.
A lot of the designs like this one use cheaper plastic gear motors, but children try to turn the wheels by hand and break them. So you could improve the design of your robots by using metal gear motors, and an Nvidia Jetson Orin Nano for more processing power.
This video shows how to use AI in your organisation and customise Large Language Models with specific information, exposing them via Open WebUI (for internal use only, as it is not secure to expose this to the internet). It also shows how to integrate it with other data sources via MCP servers.
If you want to get into AI Agent development, the best place to start is to download and install the latest Nvidia graphics card drivers (if you have an Nvidia graphics card in your computer), then download LM Studio (https://lmstudio.ai/) and start running LLM models locally and playing around with them. Only download the smallest models, and make sure your computer has good airflow and no obstructions in front of the fans, or else your CPU and GPU will overheat. You could use it for RAG, i.e. uploading your company's documents into LM Studio and asking the local AI models questions about your documents, or for MCP, where you connect it to other companies' MCP servers and let the LLM access their data too.
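To see what RAG is doing under the hood, here is a toy sketch: score each document against the question and hand the best match to the LLM along with the question. Real systems use learned embeddings rather than the simple bag-of-words similarity below, but the retrieve-then-ask pattern is the same:

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question, documents, top_k=1):
    """Return the top_k documents most similar to the question."""
    q = Counter(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: cosine_similarity(q, Counter(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

docs = [
    "The robot car uses mecanum wheels for sideways movement.",
    "Payroll runs on the last Friday of every month.",
]
best = retrieve("how do mecanum wheels move the car", docs)
# best[0] is the mecanum-wheels document, which would then be pasted
# into the LLM prompt as context.
```

LM Studio's document-upload feature automates this retrieval step for you; the sketch just makes the mechanism visible.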
The next step is to enable WSL2 (Linux mode) on your PC and run n8n locally. n8n has the LangChain AI Agent modules built in, plus a nice graphical user interface with lots of connectors that let you hook it up to many other apps, exchange data, and build workflows to automate your tasks. You could also extend it with LangGraph for more advanced AI Agent development. Then you could install Visual Studio Code and the required plugins to start developing more advanced AI apps. To start off, use Docker containers, and later move to the technologies below for production.
Companies like DeepSeek and OpenAI use the following technologies in production: Kubernetes and Ray for hyperscaling (KubeRay); vLLM for distributed inference of large language models across multiple Nvidia GPUs, rather than LM Studio; Prometheus and Grafana for monitoring; and a Postgres database for RAG. You could use these to create a technology stack that could then be reused for different types of AI companies. The underlying technology is the same; just the training data differs between industries.
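As a sketch of how Postgres serves RAG: the pgvector extension adds a vector column type and a `<->` Euclidean-distance operator, so retrieval becomes an ORDER BY over embedding distance. The table and column names below are hypothetical examples, and the `%s` placeholder would be bound to a query embedding by a driver such as psycopg:

```python
def nearest_chunks_sql(table="doc_chunks", embedding_col="embedding", top_k=5):
    """Compose a pgvector nearest-neighbour query.

    "doc_chunks" and "embedding" are illustrative names for a table of
    document chunks with a pgvector embedding column. The <-> operator
    is pgvector's Euclidean distance; %s is the driver placeholder for
    the query embedding.
    """
    return (
        f"SELECT id, content FROM {table} "
        f"ORDER BY {embedding_col} <-> %s "
        f"LIMIT {top_k}"
    )

sql = nearest_chunks_sql()
# e.g. cursor.execute(sql, (question_embedding,)) with psycopg
```

Swapping LM Studio's built-in retrieval for a query like this is what makes RAG scale past a handful of documents.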