diff --git a/content/about.md b/content/about.md
index 3abda6d..313453f 100644
--- a/content/about.md
+++ b/content/about.md
@@ -6,7 +6,9 @@ aliases:
 **I'm Adam Green, an energy engineer turned data professional** 👋
 
-I'm a data professional working on the energy transition towards a cleaner grid. I enjoy building tools & models for prescriptive analytics.
+I'm a data professional working on the energy transition. I enjoy building tools & creating models.
+
+I write this blog so I can better express ideas and concepts in conversations with friends and colleagues.
 
 In my fifteen year career, I've worked:
diff --git a/content/blog/how-i-ai-development.md b/content/blog/how-i-ai-development.md
new file mode 100644
index 0000000..adafe69
--- /dev/null
+++ b/content/blog/how-i-ai-development.md
@@ -0,0 +1,223 @@
+---
+title: How I use AI for Development
+description: TODO
+date_created: 2026-04-05
+competencies:
+ - AI
+ - Software Engineering
+ - How I
+---
+
+Titles
+- my notes on ai development
+- how i use ai for programming
+- how i program with ai
+- XXX levels of ai programming development
+
+## Introduction
+
+This blog post is a snapshot in time of how I use AI for development (aka coding, programming).
+
+## The Basics
+
+Some of the basic knowledge needed to work with large language models (LLMs). Most of it applies across non-development tasks as well.
+
+### LLMs are Random
+
+LLMs should be treated as stochastic (i.e. random). This means the same prompt should not be expected to return the same response.
A few of the things that can change between one prompt and another when using a cloud-based LLM:
+
+- Configured to be stochastic at the token generation level
+- Can be retrained (model parameters changed)
+- Tools & skills can be changed (tool & skill markdown changed)
+
+This stochastic nature is complemented by a non-stationary environment:
+- Models keep improving, making previously necessary instructions counterproductive
+- Models will be retired and taken away
+
+### Context is King
+
+Context is the text available to an LLM. It includes the system prompt (set in secret by the LLM provider), your user messages and the responses from the LLM.
+
+An illustrative (made-up) example of how context builds up within a session:
+
+```
+System prompt:  "You are a helpful assistant..." (hidden, set by the provider)
+User message:   "Write a function to parse ISO dates."
+Response:       "def parse_iso(date_string): ..."
+User message:   "Now add error handling."
+The model generates its next response from all of the above.
+```
+
+Context is important as the LLM uses this context to generate the next response. Managing the context provided to an LLM is perhaps the key skill in using LLMs.
+
+This means you need to master a few things:
+
+- How to add custom instructions that are added to context each time you use an LLM
+- When to add information into context - examples
+- When to reset the context (by starting a new session)
+
+Custom instructions are perhaps the highest value tip - when you start using any AI tool, the first thing to configure should be custom instructions. Commonly custom instructions will be added into every prompt, making them a good place for steering how you want an AI to behave based on general or specific instructions.
+
+Behaviour you always (or almost always) want:
+
+- Ask it to be more concise
+- Ask it to push back and offer alternative ideas
+- Apply coding standards (`all Python statically typed`)
+
+```
+TODO - example of my custom instructions
+```
+
+### Hallucinations
+
+LLMs can make up facts. How likely this is depends on how good you are at managing the context (see above), or on using workflows that have validation built in (see below).
LLM users that are highly skilled at managing context (adding or resetting) will experience fewer hallucinations.
+
+### Security
+
+`/sandbox`
+
+prompt injection
+
+source control helps, but if you are letting your agent run `rm -rf`, that risk exists - it's a tradeoff
+
+## Tools
+
+### Chat
+
+I've used two LLM chat apps - OpenAI (from XXXX to XXXX) & Claude (from XXXX to now). Today, Claude is the superior product - I would not recommend anything else.
+
+Never expect that your chat history will not be trained on - even if the current ToS says it won't be. At the very least, it's a risk.
+
+The things you need to be able to do with an AI chat app are configure instructions, and perhaps switch to different models if you are hitting usage limits for more powerful models.
+
+### IDE
+
+IDE can matter a lot here - Cursor is a different philosophy of IDE versus vanilla VS Code.
+
+Skills
+- source control
+- autocomplete (commonly GitHub Copilot)
+- edit prompts natively in your editor, copy & paste between the two easily
+- quickly applying AI-generated diffs
+- jump to next place (Cursor functionality)
+
+ — apply nearest diff to the source buffer
+gj — jump to the section of nearest diff
+gd — show diff between source and nearest diff
+gqd — add all diffs to quickfix list
+
+### Terminal
+
+OpenRouter to get cheap models - Kimi, GLM 5 prices versus Sonnet & Opus. Then you need to use pi.
+
+Agent harness (pi, Claude Code) versus model provider (OpenAI, Anthropic)
+
+I've used two terminal coding agents - Claude Code & pi. While I like pi and will keep checking it out, Claude Code is the superior product. Even if other coding agents can use the same base LLM (like Claude Opus), differentiators such as tools, system prompts or TUI performance matter a lot.
+
+Validation is particularly powerful for terminal coding agents, as you can `set and forget` and rely on the test validation (unit tests, linting, type checking etc.) to keep the agent on track.
Make sure to check that the agent has not changed the test code.
+
+Skills
+- source control
+- selecting models
+- creating a `CLAUDE.md` or `AGENTS.md`
+- slash commands
+- compacting memory
+- /context - see where context is going
+- creating skills
+
+Don't learn MCP.
+
+### Asynchronous
+
+Scheduled
+
+shouldn't run AI agents 24/7
+
+my first 4 agents
+- cross-references
+- tool searcher - looking at the tools I use in my Brewfile, and proposing additions
+
+## Models
+
+Claude, Kimi, GLM, Qwen
+
+## Workflows
+
+### Recursive Planning into Execution
+
+Plan loop
+- Cross-model review (e.g., plan with Gemini, implement with Claude, review with Codex) surfaces different blind spots
+
+explicitly say don't implement yet
+
+ask for a todo list, which can serve as a progress tracker
+
+resetting context
+
+want to uncover an agent's assumptions
+- great at syntax
+- but different assumptions are problems
+
+Use a concrete file as the plan
+- can be edited, add notes, persists
+- should plans be edited, or should you leave notes?
+
+Plans all go in the same place
+
+Edit the existing plan if needed
+
+Solves two problems - session management
+
+Goal of plan/research is to
+
+### Iterative Validation against Tests
+
+### Skills - A Few Custom Skills
+
+Starting out here - just a markdown file
+
+A few small skills help reduce repetitive prompting
+
+Value here is the locality - skills relevant to your workflow = 1000% more valuable than a custom skill (custom skills can be bad if they have different values)
+
+Third party = risky
+- Everyone's workflow differs; junk-drawer skills add nondeterminism and blow up context
+
+Skill - load on demand - the description allows the agent to not load the entire skill into context; specialized knowledge; if you explain something repeatedly, this is a skill waiting to be written
+- `~/.claude/skills` - global
+- `.claude/skills` - project
+
+```
+---
+name: name
+description: description (used to determine whether this skill should be used - Claude (Code?)
specific)
+---
+
+```
+
+### Reviewer
+
+### Teacher
+
+To get the most out of ChatGPT, use it as a teacher.
+
+ChatGPT can't teach you everything - but it can teach you a lot.
+
+Teach a Python developer JavaScript by converting Python code to the JavaScript equivalent.
+
+It can teach you SQL by creating both the raw SQL and the SQLAlchemy Python code to create a database table from a dictionary.
+
+ChatGPT is better at more popular languages.
+
+Ways in which ChatGPT is a good teacher:
+
+- patient,
+- doesn't require any time to context switch between problems,
+- can handle malformed & messy inputs,
+- knowledgeable.
+
+ChatGPT will make mistakes. ChatGPT often hallucinates - for example, it can create documentation for a Python package that doesn't exist.
+
+This means you must remain vigilant when working with ChatGPT. See this scepticism as a way to keep you honest and engaged. This tendency to hallucinate means you always need to think about what ChatGPT has generated.
+
+You should REWRITE all code it generates - the less you know the code, the more you should be rewriting.
+
+idea = **REWRITE AI**
diff --git a/content/blog/popular-statistics-books.md b/content/blog/popular-statistics-books.md
new file mode 100644
index 0000000..23392e7
--- /dev/null
+++ b/content/blog/popular-statistics-books.md
@@ -0,0 +1,262 @@
+---
+title: A Few Things I've Learnt from Popular Statistics Books
+description: Why statistical thinking is fundamentally about embracing uncertainty, not eliminating it.
+date_created: 2026-04-05
+date_updated: 2026-04-12
+competencies:
+- Statistics
+aliases: []
+---
+
+Over the past few years I've read a shelf of popular statistics books. I'm not sure how many I'd need to read to get over my statistics imposter syndrome.
+
+This post highlights that **statistical thinking is fundamentally about decision-making under uncertainty**. The value of statistics isn't in the numbers themselves, but in how they shape our choices.
+
+**Statistics requires simplification**.
Whether you're building a predictive model or calculating an average, you're throwing away information to make the problem tractable. That tractability, however, is exactly what statistics needs to drive decision-making.
+
+**What follows are a few insights from some of the popular statistics books I've read over the years**. Each section covers a single book, organised around a central idea that has changed how I think about data, decisions, and the gap between models and reality.
+
+The approximate order is by increasing technical depth, starting with the societal implications of models and ending with the mathematical principles underlying statistical thinking.
+
+## Weapons of Math Destruction
+
+*Cathy O'Neil*
+
+### The Limits of Predictive Modelling
+
+> No model can include all of the real world's complexity or the nuance of human communication. Inevitably, some important information gets left out.
+
+Predictive modelling (or any kind of modelling) always requires losing or abstracting away details of the real world.
+
+> Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics.
+
+The bias of predictive models comes from data and the choices made by the statistical modeller.
+
+### Flywheels
+
+> This creates a pernicious feedback loop. The policing itself spawns new data, which justifies more policing.
+
+Prediction models can create feedback loops, where the predictions made by a model influence the data used to validate and train future models.
+
+## The Signal and the Noise
+
+*Nate Silver*
+
+### Bias, Variance and Capacity
+
+Predictive modelling aims to find signal amongst noise:
+
+> The goal of any predictive model is to capture as much signal as possible and as little noise as possible.
+
+Balancing the two creates the trade-off between bias, variance and model capacity.
+ +A high capacity model is a complicated model, which will overfit to the training data: + +> Needlessly complicated models may fit the noise in a problem rather than the signal, doing a poor job of replicating its underlying structure and causing predictions to be worse. + +One approach to reducing bias is through diversity - different models can capture different parts of the signal: + +> It's critical to have a diversity of models. + +### Probability is about Decision Making + +The true value of probabilistic thinking is to improve your own thinking: + +> The virtue in thinking probabilistically is that you will force yourself to stop and smell the data-slow down, and consider the imperfections in your thinking. Over time, you should find that this makes your decision making better. + +## Naked Statistics + +*Charles Wheelan* + +### Statistics is about simplifying the world + +> Descriptive statistics exist to simplify, which always implies some loss of nuance or detail. Anyone working with numbers needs to recognize as much. + +The value of simplification is that we can understand the world. The cost of this simplification is a loss of detail. + +## Calling Bullshit + +*Carl T. Bergstrom & Jevin D. West* + +### Brandolini's principle + +Part of the struggle of the rational, statistical person is Brandolini's principle: + +> Perhaps the most important principle in bullshit studies is Brandolini's principle. Coined by Italian software engineer Alberto Brandolini in 2014, it states: "The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it." + +### Data Quality + +Data is king - the quality of the data is the most important factor in any analysis: + +> If the data that go into the analysis are flawed, the specific technical details of the analysis don't matter. + +> Begin with bad data and labels, and you'll get a bad program that makes bad predictions in return. 
+ +### Types of Probability + +There are three useful types of probability: + +- **Marginal probability**: $P(A)$ — the probability of $A$ occurring +- **Conditional probability**: $P(B|A)$ — the probability of $B$ occurring given $A$ has occurred +- **Joint probability**: $P(A,B)$ — the probability of $A$ and $B$ occurring together + +> There is a key distinction between a probabilistic cause (A increases the chance of B in a causal manner), a sufficient cause (if A happens, B always happens), and a necessary cause (unless A happens, B can't happen). + +Translating these into probability statements: + +- **Probabilistic cause**: $A$ raises the chance of $B$, or $P(B|A) > P(B)$ +- **Sufficient cause**: $A$ guarantees $B$, or $P(B|A) = 1$ +- **Necessary cause**: $B$ cannot occur without $A$, or $P(B|A^c) = 0$ ($A^c$ is the complement of $A$) + +## The Flaw of Averages + +*Sam L. Savage* + +### Average Abuse + +> Plans based on average assumptions are wrong on average. + +The average is the most commonly used statistic, so is also the most commonly abused. + +> To understand how pervasive the Flaw of Averages is, consider the hypothetical case of a marketing manager who has just been asked by his boss to forecast demand for a new-generation microchip. "That's difficult for a new product," responds the manager, "but I'm confident that annual demand will be between 50,000 and 150,000 units." "Give me a number to take to my production people," barks the boss. "I can't tell them to build a production line with a capacity between 50,000 and 150,000 units!" The phrase "Give me a number" is a dependable leading indicator of an encounter with the Flaw of Averages, but the marketing manager dutifully replies: "If you need a single number, I suggest you use the average of 100,000." The boss plugs the average demand, along with the cost of a 100,000-unit capacity production line, into a spreadsheet model of the business. 
The bottom line is a healthy \\$10 million, which he reports as the projected profit. Assuming that demand is the only uncertainty and that 100,000 is the correct average (or expected) demand, then \\$10 million must be the average (or expected) profit. Right? Wrong! The Flaw of Averages ensures that on average, profit will be less than the profit associated with the average demand. Why? If the actual demand is only 90,000, the boss won't make the projection of \\$10 million. If demand is 80,000, the results will be even worse. That's the downside. On the other hand, what if demand is 110,000 or 120,000? Then you exceed your capacity and can still sell only 100,000 units. So profit is capped at \\$10 million. There is no upside to balance the downside, as shown in Figure 1.1, which helps explain why, on average, everything is below projection.
+
+### Statistics is about Decisions
+
+Statistics is about decisions - any piece of work should always point towards influencing how a decision is made:
+
+> So what's a fair price for a piece of information? Here's a clue. If it cannot impact a decision, it's worthless.
+
+### Simpson's Paradox
+
+> Simpson's Paradox occurs when the variables depend on hidden dimensions in the data.
+
+Simpson's paradox is a phenomenon in statistics where a signal appears when data is aggregated, but disappears when the data is disaggregated. The classic example of Simpson's paradox is a study on gender bias in university admissions.
+
+Data aggregated across all departments showed a bias against women, but when disaggregated, the data showed that while four departments were biased against women, six were biased against men. The bias against women detected in the aggregated data occurred due to women being more likely to apply to more competitive departments.
+
+In the case of the quote above, the hidden dimension is the department the students applied to.
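The admissions example above is easy to reproduce numerically. The sketch below uses toy numbers (not the real study's data) to show how each department can favour women while the aggregate favours men:

```python
# Toy numbers (not the real admissions data) showing Simpson's paradox:
# each department favours women, yet the aggregate favours men.
applications = {
    "less competitive": {"women": (20, 18), "men": (100, 80)},  # (applied, admitted)
    "more competitive": {"women": (100, 20), "men": (20, 2)},
}

def rate(applied: int, admitted: int) -> float:
    return admitted / applied

# Disaggregated: women have the higher admission rate in every department
for dept, genders in applications.items():
    women_rate = rate(*genders["women"])
    men_rate = rate(*genders["men"])
    print(f"{dept}: women {women_rate:.0%}, men {men_rate:.0%}")

# Aggregated: men have the higher admission rate, because women
# mostly applied to the more competitive department
def overall(gender: str) -> float:
    applied = sum(g[gender][0] for g in applications.values())
    admitted = sum(g[gender][1] for g in applications.values())
    return admitted / applied

print(f"overall: women {overall('women'):.0%}, men {overall('men'):.0%}")
```

Running it prints the per-department rates alongside the aggregate rates, making the reversal visible at a glance.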
## Fooled by Randomness
+
+*Nassim Nicholas Taleb*
+
+### Profiting off Variance
+
+> Mild success can be explainable by skills and labor. Wild success is attributable to variance.
+
+There is a lot of noise in high-performance outcomes, and it's easy to attribute that performance to skill when it is due to luck.
+
+> Accordingly, it is not how likely an event is to happen that matters, it is how much is made when it happens that should be the consideration.
+
+A lesson from first-year engineering: `risk = hazard * probability`.
+
+### The Danger of Data
+
+> A small knowledge of probability can lead to worse results than no knowledge at all.
+
+> The problem with information is not that it is diverting and generally useless, but that it is toxic.
+
+> The problem is that, without a proper method, empirical observations can lead you astray.
+
+> It is a mistake to use, as journalists and some economists do, statistics without logic, but the reverse does not hold: It is not a mistake to use logic without statistics.
+
+Data-driven (inductive) thinking is not the only way - deductive thinking from principles and assumptions is important as well.
+
+## Statistics Done Wrong
+
+*Alex Reinhart*
+
+> Much of basic statistics is not intuitive (or, at least, not taught in an intuitive fashion), and the opportunity for misunderstanding and error is massive.
+
+Statistics is certainly unintuitive, but with enough work (learning from the past) it can become obvious.
+
+> Surveys of statistically significant results reported in medical and psychological trials suggest that many p values are wrong and some statistically insignificant results are actually significant when computed correctly.
+>
+> Even the prestigious journal Nature isn't perfect, with roughly 38% of papers making typos and calculation errors in their p values.
Other reviews find examples of misclassified data, erroneous duplication of data, inclusion of the wrong dataset entirely, and other mix-ups, all concealed by papers that did not describe their analysis in enough detail for the errors to be easily noticed.
+
+An almost 40% error rate for p-values in one of the world's top academic journals suggests they may not be a good way to determine statistical significance.
+
+### Importance of Sharing Data
+
+> Next Wicherts and his colleagues looked for a correlation between these errors and an unwillingness to share data. There was a clear relationship.
+>
+> Authors who refused to share their data were more likely to have committed an error in their paper, and their statistical evidence tended to be weaker. Because most authors refused to share their data, Wicherts could not dig for deeper statistical errors, and many more may be lurking.
+
+One principle I hold for data systems is reproducibility - for example, with machine learning, it should be possible to easily reproduce a model or any predictions it makes.
+
+This reproducibility is a kind of data sharing - it's sharing with your future self.
+
+## Map and Territory
+
+*Eliezer Yudkowsky*
+
+### Biased Sampling
+
+> When your method of learning about the world is biased, learning more may not help. Acquiring more data can even consistently worsen a biased prediction.
+
+When I started learning machine learning, I only appreciated that model predictive performance scales with data. In reality, sometimes more data is bad.
+
+### Proportion Dominance Effect
+
+> A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees, rather than 4,500 in a camp of 250,000.
+
+**Context changes how numbers are interpreted.** Presenting and converting absolute and relative measures is a key skill of working with data.
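The refugee-camp numbers above make the absolute-versus-relative point concrete. A quick sketch (illustrative arithmetic only):

```python
# The same absolute number reads very differently as a relative measure.
saved = 4_500

camps = {"camp of 11,000": 11_000, "camp of 250,000": 250_000}

for camp, population in camps.items():
    # convert the absolute count into a proportion of the population
    print(f"{camp}: {saved / population:.1%} of refugees saved")
```

Saving 4,500 lives is roughly 41% of the small camp but under 2% of the large one, even though the absolute benefit is identical.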
+ +### Cognitive versus Statistical Biases + +> A cognitive bias is a systematic error in how we think, as opposed to a random error or one that's merely caused by our ignorance. Whereas statistical bias skews a sample so that it less closely resembles a larger population, cognitive biases skew our thinking so that it less accurately tracks the truth (or less reliably serves our other goals). + +## How Not to Be Wrong + +*Jordan Ellenberg* + +### Solve Easy Problems + +> A basic rule of mathematical life: if the universe hands you a hard problem, try to solve an easier one instead, and hope the simple version is close enough to the original problem that the universe doesn't object. + +An example of this from my career is using linear models to approximate engineering relationships that are non-linear (such as the relationship between efficiency and load on a gas turbine). + +### Non-Linearity + +> Nonlinear thinking means which way you should go depends on where you already are. + +Non-linearity is closely related to state - the state of the system is important. How variables change depends on where you are. With linear relationships, the state of the system is irrelevant. + +### Improbable Things Happen A Lot + +> The universe is big, and if you're sufficiently attuned to amazingly improbable occurrences, you'll find them. Improbable things happen a lot. + +This is Littlewood's Law - that a person can expect to experience events with odds of one in a million at the rate of about one per month. + +Learning to expect that unexpected things happen a lot is one of my most treasured lessons from statistics. + +## Summary + +**Statistics is not about certainty—it's about making better decisions when certainty is impossible.** + +Every author here converges on this point, whether discussing predictive models, p-values, or the dangers of averages. The goal is never perfect knowledge. 
It's clearer thinking about uncertainty, and the humility to recognise when your model has simplified away something important. + +The best statistical thinking is **sceptical without being cynical**, quantitative without being numerically naive. Every summary statistic embeds a value judgment about what matters. In a world increasingly run by algorithms and awash in data, these lessons aren't just useful—they're essential. + +The **Popular Statistics Books Reading List** is: + +- [**Weapons of Math Destruction**](https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815) + *Cathy O'Neil* — the limits of predictive modelling and destructive feedback loops +- [**The Signal and the Noise**](https://www.amazon.com/Signal-Noise-Many-Predictions-Fail/dp/0143125087) + *Nate Silver* — bias, variance, and probabilistic decision making +- [**Naked Statistics**](https://www.amazon.com/Naked-Statistics-Stripping-Dread-Data/dp/039334777X) + *Charles Wheelan* — statistics as simplification +- [**Calling Bullshit**](https://www.amazon.com/Calling-Bullshit-Sophistry-Data-Science/dp/0525509186) + *Carl T. Bergstrom & Jevin D. West* — Brandolini's principle and data quality +- [**The Flaw of Averages**](https://www.amazon.com/Flaw-Averages-Underestimate-Risk-Inevitably/dp/1118073754) + *Sam L. 
Savage* — the abuse of averages and decision-focused statistics +- [**Fooled by Randomness**](https://www.amazon.com/Fooled-Randomness-Hidden-Markets-Incerto/dp/1400067936) + *Nassim Nicholas Taleb* — variance, survivorship bias, and the dangers of data +- [**Statistics Done Wrong**](https://www.amazon.com/Statistics-Done-Wrong-Analysis-Scientist/dp/1593276206) + *Alex Reinhart* — p-value problems and the importance of sharing data +- [**Map and Territory**](https://www.amazon.com/Map-Territory-Rationality-Sequence-ebook/dp/B07LDF7J3Q) + *Eliezer Yudkowsky* — cognitive versus statistical biases +- [**How Not to Be Wrong**](https://www.amazon.com/How-Not-Be-Wrong-Mathematical/dp/0143127535) + *Jordan Ellenberg* — mathematical thinking and improbable events + +Thanks for reading! diff --git a/content/consulting.md b/content/consulting.md new file mode 100644 index 0000000..dc6ed96 --- /dev/null +++ b/content/consulting.md @@ -0,0 +1,67 @@ +--- +title: "Consulting" +description: "Data science and engineering consulting for clean energy" +aliases: + - /services/ + - /work-with-me/ +--- + +I help energy companies and clean-tech startups **use data to accelerate the energy transition**. + +With 10+ years across energy engineering and data science, I bridge the gap between domain expertise and technical implementation. 
## Services
+
+### Energy Systems Optimization
+
+Using mixed-integer linear programming to model and optimize:
+
+- **Solar and battery storage**: Feasibility, sizing, and dispatch optimization
+- **Electric vehicle charging**: Smart scheduling and grid integration
+- **District energy**: Techno-economic modeling of combined heat and power
+
+I've applied these techniques at small startups and large energy utilities.
+
+### Data Engineering
+
+Building the infrastructure for data-driven energy companies:
+
+- **Data pipelines**: ETL workflows with Prefect, scheduled on AWS
+- **Cloud infrastructure**: Lambda, ECS, RDS, S3 on AWS
+- **CI/CD**: GitHub Actions, automated testing, infrastructure as code
+- **Databases**: Postgres with SQLAlchemy and Alembic migrations
+
+### Machine Learning for Energy
+
+Applying ML to energy problems:
+
+- **Time series forecasting**: Electricity prices, demand, renewable generation
+- **Reinforcement learning**: Optimal control of batteries and flexible loads
+- **MLOps**: Model deployment, monitoring, and retraining pipelines
+
+### Technical Mentoring
+
+Helping teams level up:
+
+- **Training**: Custom workshops on Python, ML, data engineering
+- **Code review**: Best practices, testing, and maintainability
+- **Upskilling**: Supporting junior engineers through mentoring
+
+As Bootcamp Director at Data Science Retreat, I trained 40+ data scientists and led the school to its first SwitchUp Best Bootcamp award.
+
+## How I Work
+
+**Advisory** - Strategic guidance on data and ML initiatives, typically a few hours per month.
+
+**Project-Based** - Delivery of specific solutions like optimization models or data pipelines, scoped to clear outcomes.
+
+**Embedded** - Part-time integration with your team for ongoing support and knowledge transfer.
+
+**Training** - Workshops and mentoring for upskilling your team in data science and engineering.
+ +## Let's Talk + +If you're working on the energy transition and need help with data, I'd love to hear from you. + +- **Email**: adam.green@adgefficiency.com +- **LinkedIn**: [Adam Green](https://www.linkedin.com/in/adgefficiency) diff --git a/content/demo.md b/content/demo.md new file mode 100644 index 0000000..ca05d0a --- /dev/null +++ b/content/demo.md @@ -0,0 +1,6 @@ +--- +title: Typography Demo +description: Comparing font options for the site redesign. +date_created: 2026-03-29 +layout: demo +--- diff --git a/hugo.toml b/hugo.toml index a8e80c4..0b743a8 100644 --- a/hugo.toml +++ b/hugo.toml @@ -4,7 +4,7 @@ title = 'ADGEfficiency' [markup] [markup.tableOfContents] startLevel = 2 - endLevel = 2 + endLevel = 3 ordered = false [markup.goldmark] [markup.goldmark.renderer] diff --git a/layouts/_default/demo.html b/layouts/_default/demo.html new file mode 100644 index 0000000..18d5ffe --- /dev/null +++ b/layouts/_default/demo.html @@ -0,0 +1,157 @@ +{{ define "main" }} + + +
+

Typography Options

+

Each option shows: nav, post list, and article content. Compare and pick.

+ + {{ $options := slice + (dict "class" "typo-a" "name" "A: System Sans-Serif" "desc" "Current site default. Fast, no font loading. Clean but generic.") + (dict "class" "typo-b" "name" "B: JetBrains Mono" "desc" "Monospace throughout (boristane-style). Developer aesthetic, compact. Slower to read long text.") + (dict "class" "typo-c" "name" "C: EB Garamond + Lato" "desc" "Serif headings + sans body (boz-style). Literary, elegant contrast. Two fonts to load.") + (dict "class" "typo-d" "name" "D: Inter" "desc" "Modern sans-serif designed for screens. Highly legible, professional. Popular (may feel generic).") + (dict "class" "typo-e" "name" "E: Source Serif 4" "desc" "Readable serif throughout. Warm, editorial feel. Good for long-form reading.") + (dict "class" "typo-f" "name" "F: Literata" "desc" "Screen-optimized serif. Designed for e-reading. Distinctive without being distracting.") + }} + + {{ range $options }} +
+
+ {{ .name }} + {{ .desc }} +
+ + + + + +
+
+ + Linear Programming for Energy Optimization + +
+
+ + Fine Tuning a Python Function Signature + +
+
+ + +
+

Why Linear Programming Matters

+

Linear programming is the most underused tool in a data scientist's toolkit. It lets you find optimal solutions to problems with linear constraints, which covers a surprising number of real-world scenarios in energy, logistics, and finance.

+

The key insight is that optimization is not the same as prediction. While machine learning asks "what will happen?", optimization asks "what should we do?" These are fundamentally different questions, and using pulp.LpProblem to answer the second one is often more valuable than a neural network.

+
    +
  • dispatch optimization: Scheduling battery charge and discharge cycles
  • +
  • portfolio allocation: Balancing risk and return across energy assets
  • +
  • network flow: Routing electricity through a transmission grid
  • +
+
+
+ {{ end }} +
+{{ end }} diff --git a/layouts/partials/right-sidebar.html b/layouts/partials/right-sidebar.html index f9f36d7..33629e5 100644 --- a/layouts/partials/right-sidebar.html +++ b/layouts/partials/right-sidebar.html @@ -1,7 +1,7 @@