Rankine Innovation Lab equips learners, professionals, and organisations to design and deploy AI-enabled solutions for circular systems, climate resilience, and smart agriculture—responsibly and practically.
Human-centred innovation aligned with global responsible AI guidance and sustainability practice
Aligned with UNESCO AI Ethics · Ellen MacArthur Foundation · FAO Climate-Smart Agriculture
Founded by active PhD researchers at PolyU & Arizona State University
We work at the intersection of artificial intelligence, environmental science, and agricultural innovation to build practical, deployable capability.
Applied machine learning and data methods for scientific and engineering problems—reproducible, deployable, and human-centred by design.
Design systems that eliminate waste, keep materials in use, and regenerate nature—grounded in circular economy principles.
Precision and climate-smart approaches that improve yields, reduce inputs, and build food systems resilience—locally grounded and field-ready.
A repeatable, four-stage process for turning real-world problems into deployed, governed solutions. Used in every programme, every partnership, every prototype.
Instrument the system—gather data, map field conditions, and understand user constraints before building anything.
Apply AI and analytics appropriately—from lightweight statistical approaches to advanced machine learning, matched to the problem.
Validate with pilots, responsible AI evaluation, and evidence checks. Nothing scales without proof it works.
Package as training, toolkits, or deployable services—with governance and handover built in from the start.
Practical pathways into AI and sustainability work—with portfolio-grade artefacts you can show, not just certificates you hold.
Upskilling that produces reproducible workflows, deployment-ready thinking, and real problem-solving capability.
Capability building, prototypes, toolkits, and deployment support—aligned to your specific context and constraints.
Our Knowledge Hub publishes playbooks, explainers, templates, and lab notes designed to be immediately usable—not just informative.
Triple-objective framing—productivity, adaptation, and emissions—with local acceptability at the centre. A practical entry point for decision-makers.
Data streams, sensors, AI tools, and resource efficiency at smallholder scale—from concept to field-ready checklist.
Evaluation prompts and governance questions aligned with UNESCO AI ethics guidance—for use before deploying any AI solution.
We scope, prototype, and deploy with organisations—responsibly and practically.
We work at the intersection of AI, environmental science, and agricultural innovation—building practical capability so solutions move beyond pilots and into responsible scaling.
Rankine Innovation Lab is not "one thing." It is a coherent portfolio of services with three primary studios, each mapped to clear outcomes and customer groups, plus a cross-disciplinary collaboration layer that ties them together.
Applied AI for research acceleration and data-driven problem solving in STEM—rooted in reproducibility and responsible practice. We build around the global understanding that AI adoption must be human-centred, transparent, and accountable at every stage.
We help researchers, engineers, and analysts build models that can be evaluated, explained, and deployed—not just demonstrated in a notebook and abandoned.
Closed-loop design, sustainable materials, lifecycle thinking, and measurable circularity outcomes. We design and teach circular approaches grounded in widely accepted principles: eliminate waste and pollution, circulate products and materials, regenerate nature.
Circularity is not a compliance exercise here—it is a systems design discipline grounded in evidence, and we teach it as such. Every output from this studio produces at least one measurable circularity indicator.
Precision methods, IoT-enabled decision support, and climate-resilient practices. Precision agriculture is understood as a data-driven management approach that can improve yields and reduce inputs such as water and fertilisers—we translate these concepts into deployable field tools and training pathways.
Our climate-smart agriculture work follows the FAO triple-objective framework: productivity and incomes, adaptation to climate change, and, where possible, emissions reduction—with local acceptability and practical deployability as non-negotiables.
Beyond the three studios, Rankine runs a cross-disciplinary collaboration layer: convenings, joint problem framing, and multi-stakeholder delivery. This is where the lab acts as a translation engine between AI practitioners, environmental scientists, agronomists, and implementers—turning ambitious ideas into field-ready capability.
We align our work with human-centred, rights-aware approaches to AI and education. Human rights and dignity, transparency, fairness, human oversight, and sustainability impacts are built into our project delivery—not added at the end. We draw on UNESCO's AI ethics recommendations, UN system principles for ethical AI, and international guidance on responsible innovation in development and education contexts.
Explore our programmes or start a partnership conversation.
Named, outcome-based learning experiences—not generic training. Every programme produces a tangible deliverable you can use and show.
Fast-track immersion in applied machine learning for scientific and engineering contexts. From problem framing to model evaluation to reproducible reporting—using the Rankine Method throughout. Designed for people who want to build, not just learn.
A structured sprint through circular economy principles—eliminate waste, circulate materials, regenerate nature—applied to a specific design challenge your team brings to the table. Grounded in Ellen MacArthur Foundation definitions and lifecycle thinking.
A cohort-based programme applying IoT sensor data, precision analytics, and climate-smart decision frameworks to real agricultural challenges. Built around FAO-grounded methods and locally relevant problem sets. Includes a mentored capstone project.
Embedding the Rankine Method inside an organisation—from responsible AI risk assessment to governance frameworks and deployment checklists. Aligned with UNESCO ethics guidance and UN system principles for ethical AI. Includes evaluation and handover.
We offer Organisation Sprints (2–4 weeks) to solve one defined problem—designing a farm decision dashboard, running a circularity baseline, or building a data pipeline. For deeper engagements, Capability Partnerships (3–12 months) embed the Rankine Method inside your organisation, including governance and evaluation infrastructure.
Tell us what you're trying to achieve and we'll help you find the right path.
Playbooks, explainers, templates, and lab notes designed to be immediately useful. Not a blog—a working resource library built on the same principles as our programmes.
Triple-objective framing, trade-offs, and local adaptation—a practical entry point grounded in FAO definitions for decision-makers and practitioners.
Data streams, sensors, AI tools, and resource efficiency—what precision agriculture actually looks like at smallholder scale and what data it needs.
System logic and design principles for circularity—eliminate waste, circulate materials, regenerate nature—grounded in Ellen MacArthur Foundation definitions.
Human rights, transparency, oversight, and sustainability impacts—a practical checklist aligned with UNESCO AI ethics guidance and UN system principles.
Step-by-step application of Sense → Model → Prove → Scale to a real-world agriculture or infrastructure problem, with worked examples.
Governance questions and evaluation prompts to answer before deploying any AI tool in a field, education, or organisational context.
Applied ML approaches to pipe failure prediction in water distribution networks—what data is needed, what the models can do, and what the limits are.
How machine learning and large language models are improving scalability and real-world adoption of bio-based geotechnical alternatives.
Plain-language definitions of terms across responsible AI, circular economy, and climate-smart agriculture—for practitioners, not academics.
Our research is designed to produce deployable knowledge: prototypes, evaluation results, and reusable methods—not just papers.
Each programme is designed to produce at least one tangible artefact—a tool, dataset, pilot report, or prototype—alongside training modules and open explainers.
Scope covers water, energy, soil, and climate data. This programme links STEM modelling directly with sustainability outcomes and measurable impact. Governance tools such as ethical impact assessments—aligned with UNESCO's framing that sustainability impacts and data governance must be built into AI deployment from the start—are embedded in project delivery.
Scope covers lifecycle thinking, circular design, materials flow, and "waste as resource" frameworks. Circular economy means keeping materials and products in circulation as long as possible, reducing material use, redesigning products, and recapturing waste as a resource. Our tools make these principles measurable and actionable in real project contexts.
Scope covers decision support in smallholder and commercial contexts, adaptive practices, and measurement of resource efficiency. FAO's triple-objective framework guides our approach, with local acceptability and practical deployability as non-negotiables in every output we produce.
Scope covers playbooks, explainers, curated policy-to-practice notes, and replicable training labs. AI and development ecosystems need not just tools, but evaluation infrastructure, knowledge exchange, and learning loops. This programme builds and maintains that infrastructure for the lab and its partners.
The lab is founded on active, peer-validated research at the frontier of AI, infrastructure resilience, and sustainable materials—giving every programme and partnership real credibility.
PhD researcher at PolyU whose doctoral work focuses on understanding and predicting pipe failures in water distribution networks using multi-method approaches and advanced machine learning—aimed at improving the sustainability and management of critical water infrastructure.
His published work at PolyU Scholars Hub includes modelling and decision-support approaches for productivity and planning in modular integrated construction, reflecting a focus on applied AI for real-world systems management.
PhD Candidate and Graduate Research Associate at ASU. BEng from Federal University of Technology Akure; MEng with distinction from the University of Johannesburg. His research pioneers biogeotechnics—exploring fungal mycelium as a sustainable alternative to conventional geotechnical materials, with a focus on real-world performance and scalability.
He integrates machine learning and large language models into his methodology to improve real-world adoption. Recognised as a Digital GreenTalent awardee by the German Federal Ministry of Education and Research, with fellowships linked to the American Society of Civil Engineers.
We work with universities, NGOs, public agencies, and companies to design and deliver research that is deployable, measurable, and responsibly governed. Every collaboration produces at least one reusable artefact—a tool, dataset, pilot report, or prototype.
We translate advanced technology into practical capability—grounded in rigorous research, responsible practice, and real-world relevance.
To translate advanced technology into practical capability that improves sustainability outcomes across STEM, circular systems, and agriculture.
Where many organisations stop at awareness—talks, inspiration, generic "AI 101"—Rankine Innovation Lab is execution-led. Learners and organisations leave with shipped prototypes, validated methods, reusable playbooks, and measurable outcomes. The difference between knowing about AI and being able to deploy it responsibly is exactly the gap we exist to close.
A future where innovation ecosystems across regions can build, validate, and deploy technology responsibly—so sustainable development is measurable, scalable, and locally owned.
Human-centred by design
Practical and field-ready
Transparent about evidence and limitations
Collaborative and cross-disciplinary
Sustainability-first systems thinking
The lab was co-founded by two researchers working at the frontiers of AI, infrastructure resilience, and sustainable materials—bringing real, field-tested credibility to everything the lab does.
Co-Founder · PhD Researcher, The Hong Kong Polytechnic University
Ridwan is a researcher whose work focuses on applying advanced modelling and machine learning to improve the management and sustainability of critical infrastructure systems. His doctoral work at The Hong Kong Polytechnic University (Department of Building and Real Estate) examines water distribution network failures and develops predictive models to support better resource allocation and preventive decision-making.
His published work at PolyU Scholars Hub includes modelling and decision-support approaches for productivity and planning in modular integrated construction—reflecting a consistent focus on applied AI for complex real-world systems.
Co-Founder · PhD Candidate & Graduate Research Associate, Arizona State University
Adesola is a PhD Candidate and Graduate Research Associate at Arizona State University in Civil, Environmental and Sustainable Engineering. He holds a BEng from Federal University of Technology Akure and an MEng with distinction from the University of Johannesburg. His research pioneers biogeotechnics—exploring fungal mycelium as a bio-based alternative to conventional geotechnical materials, with a focus on real-world performance and scalability.
He integrates machine learning and large language models into his methodology to improve scalability and real-world adoption. He is a Digital GreenTalent awardee recognised by the German Federal Ministry of Education and Research, with fellowships linked to the American Society of Civil Engineers.
We align our approach with globally recognised guidance on ethical AI governance and human-centred technology adoption—particularly in education and sustainability contexts. This includes UNESCO's AI ethics recommendation, which positions human rights, dignity, transparency, fairness, and human oversight as core to AI deployment; UNESCO's generative AI guidance for education; and the UN system's principles for ethical AI use, which emphasise lifecycle ethics, do-no-harm, privacy, transparency, accountability, and inclusion.
Our commitment is not performative. Responsible AI checks are embedded into every programme, every prototype, and every partnership we deliver. We are transparent about evidence, honest about limitations, and explicit about trade-offs—because that is what serious, trustworthy practice looks like.
Explore our programmes, browse the Knowledge Hub, or start a conversation.
Tell us what you're trying to achieve and what constraints matter. We respond to every message.
Whether you're a learner exploring our programmes, an organisation looking to build capability, or a researcher interested in collaborating—we want to hear from you.