London Meetup Summary: Insights on Control Systems, LLMs, and AI Tools
At a recent London meetup, Serge Kozlov (Conundrum) explored deploying optimal control systems in factories. Using dynamic controllers with model feedback, he showed how to balance material flow to prevent overload while maximizing yield. Model predictive control, with models trained via system identification, enables real-time parameter adjustment. The solution requires low-latency on-prem deployment and relies on Kafka, ClickHouse, Kubernetes, and Python SDKs for 24/7 monitoring, an approach proven effective in client projects.
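To make the pipeline concrete, here is a minimal sketch of the two steps described: least-squares system identification followed by receding-horizon control. The 1-D plant, noise levels, and grid-search "optimizer" are all illustrative stand-ins, not Conundrum's actual setup.

```python
import numpy as np

# Hypothetical 1-D plant: x[k+1] = a*x[k] + b*u[k]; the true a, b are
# unknown to the controller and must be identified from logged data.
a_true, b_true = 0.9, 0.5

# --- System identification: fit a, b by least squares ---
rng = np.random.default_rng(0)
us = rng.uniform(-1, 1, 200)                 # logged control inputs
xs = np.zeros(201)
for k in range(200):
    xs[k + 1] = a_true * xs[k] + b_true * us[k] + rng.normal(0, 0.01)
A = np.column_stack([xs[:-1], us])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, xs[1:], rcond=None)

# --- Receding-horizon control: grid search over a bounded constant move ---
def mpc_step(x, target, horizon=5, candidates=np.linspace(-1, 1, 201)):
    """Pick the control input minimizing predicted tracking error."""
    best_u, best_cost = 0.0, float("inf")
    for u in candidates:
        xp, cost = x, 0.0
        for _ in range(horizon):
            xp = a_hat * xp + b_hat * u   # roll the identified model forward
            cost += (xp - target) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Drive the plant toward a setpoint of 1.0
x = 0.0
for _ in range(30):
    x = a_true * x + b_true * mpc_step(x, target=1.0)
```

A production controller would replace the grid search with a proper constrained optimizer, but the shape is the same: fit a model offline, then re-plan from the latest state at every step.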
Victor Zommers demonstrated visualizing LLM embeddings with t-SNE (Flask, Three.js, MongoDB), focusing on keeping LLMs relevant after training. His pipeline integrates real-time macroeconomic data for traders, delivering timely insights in fast-paced markets.
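The core of such a visualization is projecting high-dimensional embedding vectors down to 2-D points for plotting. As a dependency-free sketch of that idea, here is a PCA projection (via SVD) standing in for t-SNE; the random "embeddings" are illustrative, not output from a real model.

```python
import numpy as np

# Stand-ins for sentence embeddings: 100 vectors x 384 dims, two loose clusters
rng = np.random.default_rng(1)
emb = np.vstack([
    rng.normal(0.0, 1.0, (50, 384)),
    rng.normal(3.0, 1.0, (50, 384)),
])

# Project to 2-D via PCA (with scikit-learn installed, t-SNE is a drop-in:
# coords = TSNE(n_components=2).fit_transform(emb))
centered = emb - emb.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T   # (100, 2) points, ready for a scatter plot
```

The resulting 2-D coordinates are what a frontend like Three.js would render as an interactive scatter plot.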
Lightning Talks:
A blend of cutting-edge tech and practical solutions—ideal for developers and engineers optimizing industrial systems, LLMs, or workflows!
Spotting AMD Early: Our AI Innovation at NHS Hack Day
Age-related macular degeneration (AMD) affects 190 million people globally, driving an urgent need for scalable solutions. At NHS Hack Day, our team presented Deep Ocular AI—a tool leveraging a UNet-based architecture to analyze OCT retinal scans and tabular data, improving AMD detection and segmentation.
Current diagnostics rely on subjective, time-consuming manual analysis. Our model addresses this by combining multimodal inputs (cross-vendor images + clinical data) to enhance generalization, tackling inconsistent image quality and disease progression variability.
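Segmentation quality in this kind of work is commonly scored with the Dice coefficient. Here is a minimal sketch; the toy masks are illustrative and this is the generic metric, not our team's exact evaluation code.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Toy 4x4 masks: predicted lesion area vs. ground-truth annotation
pred  = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice_score(pred, truth))  # 2*3 / (4+3) ≈ 0.857
```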
Crucially, we engaged clinicians to address real-world adoption barriers. Their feedback shaped our design, ensuring practicality and alignment with workflows—a key step for AI integration into routine care.
By automating early-stage AMD classification, we aim to reduce costly delays (e.g., treatments like Aflibercept at ~$2,000/dose) and preserve vision through timely intervention.
Huge thanks to teammates Mark, Rohit, and Ann, and to Moorfields Eye Hospital for the collaboration. Watch our prototype demo here (due to health reasons, I didn't attend the demo)!
MLOps London Meetup Insights: LLMs, AGI, and Serving Strategies
The latest MLOps London Meetup explored cutting-edge AI advancements. Dr. Jodie Burchell opened with "The IQ of AI", linking Claude Shannon's logic gates to GPT-4's AGI potential. She highlighted Francois Chollet's ARC benchmark for measuring skill acquisition and urged caution around biases in AGI systems.
Chris Samiullah unpacked open-source LLM development, tracing progress from GPT-3.5 to locally deployable models via quantization and GPU optimization. He championed RAG for accuracy and introduced DeepEval for LLM assessment, aligning with Andrej Karpathy's "limitless scaling" vision.
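The quantization idea behind locally deployable models can be shown in miniature: symmetric int8 quantization stores weights as 8-bit integers plus one scale factor, cutting memory roughly 4x versus float32 at a small reconstruction cost. This is a generic illustration, not any specific library's scheme.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, (1024, 512)).astype(np.float32)  # toy weight matrix

q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()  # bounded by scale / 2
```

Real LLM quantizers refine this with per-channel or per-block scales and 4-bit formats, but the memory-for-precision trade is the same.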
Ramon Perez closed with model serving strategies, contrasting Batch, Online, and Streaming approaches. He outlined deployment paradigms—embedded models (edge devices), Model-as-a-Service (collaboration-friendly), and niche Model-as-Data systems—each balancing trade-offs in scalability and maintenance.
Together, these talks underscored MLOps' rapid evolution, blending theoretical rigor (ARC, AGI ethics) with pragmatic tools (RAG, quantization) to shape AI's future.
PyData London 80th Meetup: RAG, Dependencies, and OS Debates
Victor Naroditskiy's talk on AI-driven enterprise search highlighted replacing outdated keyword methods with semantic querying. By encoding text into embeddings, Retrieval Augmented Generation (RAG) bridges queries to siloed data (Jira, Slack, etc.) while linking answers to sources—unlike ChatGPT. Key insights: prioritize fine-tuning embeddings over full LLMs for cost efficiency, and blend semantic search with filters (dates/keywords) for hybrid robustness.
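The hybrid pattern of semantic ranking plus hard metadata filters can be sketched as follows; the documents, vectors, and field names are all hypothetical stand-ins for a real store and embedding model.

```python
import numpy as np
from datetime import date

# Hypothetical indexed documents with precomputed embeddings and metadata;
# in a real system the vectors come from an embedding model, not an RNG.
docs = [
    {"text": "Deploy failed on staging", "source": "jira",  "date": date(2024, 3, 1)},
    {"text": "Lunch menu for Friday",    "source": "slack", "date": date(2024, 3, 2)},
    {"text": "Staging rollback steps",   "source": "jira",  "date": date(2023, 1, 5)},
]
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(docs), 8))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def hybrid_search(query_emb, source=None, after=None, top_k=2):
    """Rank by cosine similarity, restricted by hard metadata filters."""
    q = query_emb / np.linalg.norm(query_emb)
    scores = emb @ q
    keep = [i for i, d in enumerate(docs)
            if (source is None or d["source"] == source)
            and (after is None or d["date"] >= after)]
    keep.sort(key=lambda i: scores[i], reverse=True)
    return [docs[i]["text"] for i in keep[:top_k]]

# The filtered, ranked passages then go to the LLM as grounded context
hits = hybrid_search(emb[0], source="jira", after=date(2024, 1, 1))
```

Keeping the filters as hard constraints rather than folding them into the similarity score is what gives the hybrid approach its robustness: a date cutoff can never be outvoted by a high embedding score.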
A lively Python dependency panel debated tools like pip, conda, poetry, and rye. Experts weighed reproducibility, isolation, and ecosystem fit—a reminder that no "perfect" solution exists, but context rules.
Lightning Talk: Casper Da Costa-Luis shared his switch to Windows for ML workflows via WSL2, praising seamless Linux tool integration (Python stacks, CLI) within Windows' UI. Personal note: While WSL2 bridges gaps, I'll stick with native Linux for its flexibility and terminal-first ethos!
From semantic search to dependency chaos, there was plenty to digest; I enjoyed the first talk most, though RAG does seem to be the new shiny thing!
At the PyData London Meetup, two talks stood out for their practical insights into deploying data solutions effectively.
1. From Jupyter to Web Apps with Taipy
Marine Gosselin and Florian Jacta demoed Taipy, an open-source platform transforming Python data workflows into full-stack web apps. Designed to bypass JavaScript hurdles, Taipy integrates frontend/backend infrastructure, Plotly visualizations, and collaboration tools—directly from Jupyter or IDEs. While its rapid prototyping impressed me, I noted the absence of experiment tracking, a critical gap for iterative data science. Still, Taipy's vision to turn models into interactive apps (not just static reports) could democratize data product delivery.
2. Michael Natusch's 10 Rules to Not Fail at ML
Michael's talk resonated deeply: many teams know these rules but overlook them. Highlights:
His emphasis on ethics, governance, and user-centric impact was a reminder that ML success hinges on process, not just algorithms.
Takeaways
Taipy excites but needs maturity (hello, experiment tracking!). Michael's rules? Obvious yet underappreciated—a blueprint for ML done right.
From Forecasts to Chatbots: Event Takeaways
Leonidas Tsaprounis dissected forecast evaluation, contrasting point metrics (RMSE, MAE) with distributional methods (log score, CRPS). While point forecasts target mean/median accuracy, distributional models offer probabilistic ranges—key for inventory planning. A critical insight: near the median, CRPS and log scores conflict, underscoring trade-offs in uncertainty modeling.
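To see how a distributional metric differs from a point metric, here is the standard closed-form CRPS for a Gaussian forecast: two forecasts with the same mean (and hence the same point error on a given observation) score differently depending on their stated uncertainty. The numbers below are illustrative.

```python
import math

def crps_gaussian(mu: float, sigma: float, y: float) -> float:
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# Two forecasts with the same mean, so identical point error on y,
# but different stated uncertainty: CRPS rewards the sharper one.
y = 10.0
sharp = crps_gaussian(10.0, 1.0, y)  # confident and correct
vague = crps_gaussian(10.0, 5.0, y)  # correct mean, wide interval
```

RMSE would rate both forecasts identically here; CRPS penalizes the needlessly wide interval, which is exactly the property that matters for inventory planning.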
John Sandall's private ChatGPT demo—built with Streamlit, LangChain, and Vicuna-13B—showcased accessible AI tooling. Using llama.cpp, he optimized the fine-tuned LLaMA model to run smoothly on a MacBook, proving lightweight local LLM deployment is achievable.
In lightning talks:
Personal reflection: While tools like Streamlit democratize AI apps, Casper's talk reminded me that open-source's legal ambiguities demand proactive governance—a nuance often overlooked in tech's rush to innovate.
How do you balance practicality with ethical rigor in your projects?
Sefik Ilkin Serengil (Vorboss) unpacked billion-scale facial recognition using FaceNet's deep CNNs to generate face embeddings. By measuring Euclidean distances between vectors, the model distinguishes identities. For speed at scale, approximate nearest neighbor (ANN) search—used by Spotify (ANNOY) and Meta (FAISS)—trades slight accuracy for efficiency, ideal for real-time applications.
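The verification step reduces to a distance test between embedding vectors. In this sketch, random vectors stand in for real FaceNet embeddings, and the threshold is a hypothetical value that would in practice be tuned on labeled validation pairs.

```python
import numpy as np

def euclidean(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))

# Random vectors stand in for 128-d FaceNet embeddings of face crops
rng = np.random.default_rng(0)
anchor = rng.normal(size=128)
same_person = anchor + rng.normal(scale=0.05, size=128)  # small perturbation
other_person = rng.normal(size=128)                      # unrelated identity

THRESHOLD = 1.0  # hypothetical cutoff, tuned on labeled validation pairs
print(euclidean(anchor, same_person) < THRESHOLD)   # True  -> same identity
print(euclidean(anchor, other_person) < THRESHOLD)  # False -> different
```

At billions of identities, brute-force distance computation becomes the bottleneck, which is where ANN indexes like ANNOY and FAISS come in.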
Pavel Katunin and Anton Nikolaev showcased ML-driven stem cell research, automating neuron conversion via computer vision. Their open-source robotic system tracks cell differentiation, using focus measures refined by ML-trained sharpness models. A robust backend (S3 storage, metadata management, node orchestration) ensures reproducibility and scalability—key for high-throughput experiments.
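A classic focus measure of the kind such sharpness models build on is the variance of the image Laplacian: sharper images carry more high-frequency edge energy. The checkerboard "image" and crude box blur below are illustrative stand-ins for real microscopy frames, not the team's pipeline.

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Variance of the discrete Laplacian: higher means sharper."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# Toy frame: a crisp checkerboard vs. a repeatedly blurred (defocused) copy
sharp = np.indices((64, 64)).sum(axis=0) % 2.0
blurred = sharp.copy()
for _ in range(3):  # crude box blur as a stand-in for optical defocus
    blurred = (blurred
               + np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
               + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 5.0

print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```

An autofocus loop can maximize this score over stage positions; an ML-trained sharpness model refines the same idea for the messier contrast profiles of live cells.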
In a lightning talk, Besart Shyti (Meta) shared his transition from software to ML engineering, advocating for hands-on projects and pair programming over passive learning. His mantra: "Build from scratch, iterate fast, and embrace collaboration."
Takeaway: From facial recognition to bio-optimization, PyData highlighted tools balancing scale and precision—while Besart's journey reminded us that growth often lies beyond traditional paths.