U11G - Machine View
This page provides a machine-readable aggregation of content from u11g.com.
Work Experience
Senior Fullstack Engineer @ Camunda
2019-07 - Present | https://camunda.com/
Operating within a fully remote team, I have contributed to Camunda Cloud since its inception. The platform utilizes a **cloud-native workflow engine (Zeebe)** to execute BPMN processes. My role encompasses the entire technology stack, involving responsibility for both **backend and frontend system development**. We collaborate closely with the SRE organization and product teams to **deliver a comprehensive solution**.
Cloud Adviser @ SICK AG
2017-09 - 2019-06 | https://www.sick.com/
At SICK, a leader in sensor technology, I was tasked with **establishing a scalable DevOps organization** at a new location to support the **expansion of cloud offerings**. **Acting as a liaison** between Product Management and R&D, I **introduced agile and lean methodologies** to optimize the software development process.
The team successfully leveraged these techniques to efficiently create services such as the AppPool and AssetHub.
Team Lead Order Management @ IONOS
2011-08 - 2019-06 | https://www.ionos.de/
As a **founding member of a new development team**, I played a key role in building the **next-generation order management platform** for IONOS. The initial phase focused on implementing the platform using BPMN processes, involving close collaboration with business support systems teams and stakeholders to align requirements. Following the launch of the initial process, I advanced to **Lead Developer**, overseeing the rollout of subsequent international order processes.
In my capacity as Team Lead, I **unified development and operations teams into a DevOps organization**, assuming **plan-build-run responsibility** for the order management platform. Beyond continuous platform development and requirement implementation, I also managed the migration of several legacy systems to the new architecture.
Research Associate @ FZI
2011-01 - 2011-07 | https://www.fzi.de/
At the FZI, I **completed my diploma thesis in healthcare** and pursued a **PhD in Ambient Assisted Living**. I **collaborated with European research institutions** on projects such as universAAL. Ultimately, I decided to transition from academic research to industry application.
The FZI Forschungszentrum Informatik (Research Center for Information Technology) is a non-profit research institute for applied computer science in Karlsruhe, Germany, established in 1985. FZI collaborates closely with the Karlsruhe Institute of Technology (KIT) but is not formally affiliated with it.
Software Engineer and Consultant @ Camos
2009-05 - 2010-12 | https://www.camos.de/
Camos marked the beginning of my professional career following graduation. I successfully navigated the intersection of software development and customer consulting, gaining valuable client-facing experience. My responsibilities included the **initial deployment of the core product**, **conducting customer training** on the development kit, and **tailoring the core product** to meet specific client requirements.
CAMOS is a German software company that specializes in developing and providing enterprise resource planning (ERP) software for small and medium-sized businesses. Their software, CAMOS ERP, offers a range of features such as financial management, supply chain management, and project management. The software is designed to be easy to use, and can be customized to fit the specific needs of a business. CAMOS also provides support and training services to help customers get the most out of their ERP software.
Master of Science in Computer Science @ Karlsruhe Institute of Technology
2003-10 - 2009-04 | https://www.kit.edu/
My passion for programming began at age ten with Turbo Pascal, setting a clear path for my future career. This led me to pursue Computer Science at KIT, where I specialized in **Information Systems and Telematics**, completing my **diploma thesis in Health Care**.
The Karlsruhe Institute of Technology (KIT) is a public research university located in Karlsruhe, Germany. It was formed in 2009 by the merger of the University of Karlsruhe and the Karlsruhe Research Center. KIT focuses on natural sciences, engineering, and technology, and offers a wide range of undergraduate and graduate degree programs. KIT is also a member of the TU9 German Institutes of Technology, a group of the nine leading technical universities in Germany.
Projects
Aime Directory
Year: 2025 | Tags: typescript, astro, ai, mcp, llm, directory, vscode, developer-tools, prompts, github-copilot | [Website](https://aime.directory)
Curated tools & knowledge for building modern AI-powered software. A comprehensive directory of Model Context Protocol servers (MCPs), VSCode configs, prompts, instructions, articles, and tools for AI-assisted development.
Aime Directory is a curated platform for developers building AI-powered software. It provides a comprehensive collection of resources including Model Context Protocol (MCP) servers, VSCode configurations, prompts, instructions, articles, and tools.
The directory features searchable and indexed content covering:
* MCPs for extending AI capabilities
* VSCode configuration presets for GitHub Copilot and other AI tools
* Ready-to-use prompts for common development tasks
* Framework-specific instruction files for TypeScript, Angular, Nest.js, and more
* Curated articles about AI development practices
* Developer tools for AI-assisted coding
Built with modern web technologies and kept intentionally minimal, Aime Directory helps developers discover and utilize the best tools and practices for AI-powered development workflows.
Camunda Directory
Year: 2025 | Tags: typescript, astro, camunda, bpmn, workflow, process-automation, directory, connectors, developer-tools | [Website](https://camunda.directory)
A curated directory of resources, tools, and best practices for the Camunda ecosystem. Discover connectors, templates, plugins, and community contributions for process automation and workflow orchestration.
Camunda Directory is a comprehensive resource hub for the Camunda process automation ecosystem. It serves as a central discovery platform for developers and business process experts working with Camunda Platform, Camunda Cloud, and related workflow technologies.
All data is fetched from public resources.
Boring Dev Tools
Year: 2025 | Tags: typescript, react, developer-tools, utilities, local-first, privacy, jwt, json, cloudflare | [Website](https://boringdevtools.com)
A collection of essential developer utilities that work locally in your browser. No fancy UIs, no backends, no data collection - just the tools you need including JWT decoder, URL decoder, Cron expressions, JSON formatter, and more.
Boring Dev Tools is a collection of essential developer utilities designed with simplicity and privacy in mind. All tools run entirely in your browser with no backend required, ensuring your data never leaves your machine.
Key features:
* **Local First**: Everything runs in your browser, no internet connection required after initial load
* **Privacy Focused**: Your data is never sent to any server - it stays on your device
* **Browser Persistence**: Uses local storage to save your data between sessions
* **Boring Design**: No distractions, just functional tools that get the job done
Available tools include JWT decoder/validator, URL encoder/decoder, Cron expression builder, JSON formatter/validator, Base64 encoder/decoder, hash generators, and more. New tools are added regularly based on developer needs.
Perfect for developers who value privacy, simplicity, and tools that just work without the overhead of accounts, authentication, or cloud services.
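To illustrate the local-first idea, here is a minimal sketch of what a purely client-side JWT decode boils down to (illustrative only, not the actual tool's source):

```typescript
// Minimal sketch of a purely client-side JWT decode (illustrative only,
// not the actual Boring Dev Tools source). No network call: the token
// never leaves the browser.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const [, payload] = token.split(".");
  if (!payload) throw new Error("Not a JWT");
  // JWTs use base64url, so map it back to standard base64 before atob().
  const base64 = payload.replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(atob(base64));
}

// Example: paste a token and read its claims locally.
console.log(decodeJwtPayload("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxMjMifQ.x"));
```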
Data Democracy
Year: 2025 | Tags: typescript, data-analytics, business-intelligence, data-governance, self-service, data-literacy, transparency, decision-making | [Website](https://dd.u11g.com)
Empowering organizations to democratize data access and decision-making. Tools, resources, and best practices for making data accessible and actionable for everyone in your organization.
A tool to instantly understand the health of German politics with understandable metrics and data.
Dev Pulse
Year: 2025 | Tags: typescript, react, developer-tools, analytics, team-health, productivity, metrics, dashboard, engineering-management | [Website](https://devpulse.u11g.com)
A platform for tracking developer productivity and team health metrics. Monitor burnout indicators, code velocity, and collaboration patterns to build sustainable, high-performing engineering teams.
Dev Pulse is an analytics platform designed to help engineering leaders understand and improve team health, productivity, and sustainability. It provides actionable insights into developer wellbeing and team dynamics without invasive monitoring.
Dynamic Pong
Year: 2025 | Tags: typescript, react, canvas, game, physics, ai-generated, github-copilot, interactive | [Website](https://dynamic-pong.u11g.com)
A multi-area competitive pong game where balls battle for territory. Watch as each ball defends its colored zone while trying to conquer others in this AI-designed physics playground.
Dynamic Pong is a competitive multi-area physics game that reimagines the classic Pong concept with territorial conquest mechanics. Instead of two paddles, multiple balls battle for control of a divided playing field.
**How It Works:**
* The playing field is divided into configurable areas (default: 4)
* Each area has a unique color pair: background and ball color (complementary)
* Each ball starts randomly positioned in its home area
* Balls move in physics-based trajectories, bouncing off walls at realistic angles
* When a ball enters enemy territory, it “conquers” it by removing square bricks
* A ball loses when its territory shrinks below 9x the ball’s size
* The losing area is transferred to the ball with the smallest remaining territory
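A rough TypeScript sketch of the loss-and-transfer rules above (hypothetical names; the game's actual source is not shown here):

```typescript
// Hypothetical sketch of the territory rules described above
// (names are illustrative; the real game source is not shown here).
interface Area {
  ballSize: number          // the ball's size
  bricks: Set<string>       // remaining bricks owned by this area
}

// A ball loses when its territory shrinks below 9x the ball's size.
function hasLost(area: Area): boolean {
  return area.bricks.size < 9 * area.ballSize
}

// The losing area goes to the ball with the smallest remaining territory.
function transferLosingArea(areas: Area[], loser: Area): void {
  const others = areas.filter((a) => a !== loser)
  const smallest = others.reduce((min, a) =>
    a.bricks.size < min.bricks.size ? a : min
  )
  for (const brick of loser.bricks) smallest.bricks.add(brick)
  loser.bricks.clear()
}
```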
**What Makes It Special:** This entire game was conceptualized and implemented using GitHub Copilot. I saw others implementing dynamic pong variations and wanted to explore how AI could help me build a more feature-rich version with configurable parameters and strategic depth.
**Configurable Options:**
* Field dimensions (square playing area)
* Number of competing areas
* Ball size (width/height)
* Game speed and physics parameters
Built with React and Canvas, Dynamic Pong demonstrates how AI-assisted development can bring creative game concepts to life quickly while maintaining clean, maintainable code.
Lifosy
Year: 2024 | Tags: typescript, react, local-first, productivity, dashboard, notes, habits, git, cross-platform, privacy, markdown | [Website](https://lifosy.com)
The Life Operating System - A local-first personal management platform with dashboards, notes, habits, tasks, and micro pages. Your data belongs to you, stored as files with infinite Git-based history and cross-platform sync.
Lifosy is a Life Operating System designed to help you organize and manage your personal and professional life without sacrificing data ownership or privacy. Built on the principle of “files over services,” Lifosy ensures your data always belongs to you.
**Core Philosophy:**
* **Ownership**: Your data is stored as files. If Lifosy disappears, your data stays with you
* **Universal Access**: Works on desktop, mobile, Windows, Mac, Linux, iOS, and Android
* **Simplicity**: Just editing files - whether through the editor or widgets
**Key Features:**
* **Customizable Dashboards**: Generate personal, work, or project dashboards in minutes with various widgets
* **Time-Travel History**: Infinite history powered by Git - never lose a thought or idea
* **Supercharged Notes**: Powerful editor that adapts to your thinking style
* **Habit Tracking**: Intelligent habit tracking to build consistency
* **Brag Documents**: Document your accomplishments and wins
* **Action Log**: Task tracking that actually gets things done
* **Micro Pages**: Free events, forms, biolinks, and points of interest without external services
* **RSS Reader**: Manage and read your RSS feeds efficiently
* **Local-First Sync**: Sync everything to your local machine with cloud backup
Lifosy combines the flexibility of file-based systems with the convenience of modern productivity apps, giving you complete control over your life’s data while maintaining simplicity and universal access.
dapphuntr
Year: 2022 | Tags: typescript, react, cloudflare, nodejs, ethereum, ens, web3, nft, ipfs, pinata, polygon, solidity, optimism, thegraph, nextjs, hardhat | [Website](https://ethglobal.com/showcase/web3hunt-43443)
DappHunter is for finding the coolest new dapps and/or for showcasing and receiving feedback on your dapps. Built without a dedicated backend: only smart contracts, IPFS, and The Graph. This project was implemented during the ETH Amsterdam 2022 hackathon and won a prize from The Graph.
DiyPunks
Year: 2022 | Tags: typescript, react, cloudflare, ethereum, ens, web3, nft, ipfs, pinata, polygon, solidity | [Website](https://diypunks.xyz/)
Generate and mint your own punk! Since cryptopunks became very expensive and I'd like to get at least the art for me I've implemented my own punk project.
EthME
Year: 2021 | Tags: typescript, angular, cloudflare, ethereum, ens, web3, opensea, firebase, newrelic, ipfs, pinata | [Website](https://ethme.at)
Your chic web3 identity and profile, powered by Ethereum and IPFS
fcheat
Year: 2021 | Tags: nodejs, typescript | [GitHub](https://github.com/urbanisierung/fcheat)
fcheat is for all who cannot remember all the commands ;) It is a CLI that can be extended with your own commands. It helps you to find your commands quickly.
Find the next victim
Year: 2021 | Tags: typescript, angular, netlify | [Website](https://u11g.com/findthenextvictim)
You are always looking for a volunteer? As the next chair for a meeting, as the next at the standup, ...? This miniapp helps you. Just publish a Google Spreadsheet with all participants and you are ready to go.
Generative Arts
Year: 2021 | Tags: typescript, nodejs, canvas, angular, netlify, firebase, camunda | [GitHub](https://github.com/generative-arts)
This is a GitHub project I started with a colleague. We want to try out different techniques in Generative Art. We are also creating assets that we will use for a conference talk on generating art with a BPMN process.
SkunkWorks NFT
Year: 2021 | Tags: typescript, nodejs, canvas, ethereum, web3, nft | [Website](https://urbanisierung.dev/)
Skunk Works is a collection of NFTs - unique digital collectibles, working within the Ethereum Blockchain given a high degree of autonomy. 10k skunks have been programmatically generated from a wide range of combinations, each with unique characteristics and different traits.
MakerDAO Delegates
Year: 2021 | Tags: typescript, angular, netlify, ethereum, etherscan, web3 | [Website](https://u11g.com/makerdaodelegates)
A webapp that aggregates and visualizes diverse data from the MakerDAO ecosystem regarding delegates.
zukuNFT
Year: 2021 | Tags: typescript, nodejs, angular, opensea, nft, web3 | [Website](https://zunft.xyz)
100 OpenSea collections analyzed regarding connections between collections.
RestZeebe
Year: 2020 | Tags: typescript, nodejs, angular, netlify, firebase, camunda, zeebe, express, serverless | [Website](https://restzeebe.app)
If you want to try out Camunda Cloud's workflow engine without implementing a single line of code, RestZeebe will help you. Register service workers, send messages, or start new instances from your browser.
Websiteshot
Year: 2020 | Tags: typescript, nodejs, canvas, angular, netlify, firebase, gatsby, newrelic, express, serverless, docusaurus, stripe | [GitHub](https://github.com/websiteshot)
Never spend time again creating screenshots of your websites.
Zeebetron: How to Manage Multiple Zeebe Profiles with Electron
Year: 2020 | Tags: typescript, nodejs, angular, electron, camunda, zeebe | [GitHub](https://github.com/urbanisierung/zeebetron)
Learn how to use a simple Electron app to switch between different Zeebe profiles and communicate with various Zeebe brokers. This can save you time and hassle when working with Zeebe, a cloud-native workflow engine for microservice orchestration.
If you are using [Zeebe](https://camunda.com/platform/zeebe/) to automate your business processes, you may need to work with different Zeebe brokers depending on your project, environment, or client. However, switching between different Zeebe profiles can be tedious and error-prone, especially if you have to manually edit configuration files or environment variables.
That’s why I created a small Electron app that allows you to easily manage and switch between different Zeebe profiles. This app lets you create, edit, and delete Zeebe profiles, and automatically sets the appropriate environment variables for each profile. This way, you can communicate with any Zeebe broker without hassle.
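Conceptually, a profile is nothing more than a named bundle of the usual Zeebe client environment variables; here is a rough sketch of the idea (illustrative, not the app's actual code):

```typescript
// Rough sketch of the profile idea (illustrative; not Zeebetron's actual code).
// A profile is just a named bundle of the standard Zeebe client env variables.
interface ZeebeProfile {
  name: string
  env: {
    ZEEBE_ADDRESS: string
    ZEEBE_CLIENT_ID?: string
    ZEEBE_CLIENT_SECRET?: string
    ZEEBE_AUTHORIZATION_SERVER_URL?: string
  }
}

// Switching profiles means exporting that bundle into the environment.
function activate(profile: ZeebeProfile): void {
  for (const [key, value] of Object.entries(profile.env)) {
    if (value !== undefined) process.env[key] = value
  }
}
```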
In this article, I will show you how to use this app and how it can make your life easier when working with Zeebe. You will learn how to:
* Install and run the app
* Create and edit Zeebe profiles
* Switch between Zeebe profiles
* Test your connection to Zeebe brokers
Let’s get started!
Posts
Introducing Camunda Directory
Date: 2025-12-05 | Tags: camunda, directory, sources, articles, connectors, jobs
Your One-Stop Shop for Everything Camunda. Camunda Directory aggregates blog posts, forum threads, videos, courses, repos, jobs, and connectors from across the Camunda ecosystem into one searchable place.
## Why This Page Exists
### Camunda is Making Waves 🌊
Here’s the thing: Camunda was just named a Visionary in the [2025 Gartner® Magic Quadrant™ for Business Orchestration and Automation Technologies (BOAT)](https://page.camunda.com/wp-2025-gartner-magic-quadrant-for-boat). That’s not a typo - BOAT is the new hot category that combines process orchestration, automation, and AI capabilities for enterprise-wide automation.
When Gartner creates a new category and you’re already a Visionary in it? That’s like being invited to the cool kids’ table before anyone knew there was a cool kids’ table.
### The Problem We Solved
But here’s where it gets interesting. With all this growth comes… content. A lot of content.
- Official blog posts? Check.
- Forum discussions? Check.
- YouTube tutorials? Check.
- Academy courses? Check.
- GitHub repos? Check.
- Job postings? Check.
- Marketplace connectors? Check.
- ProductBoard ideas? Also check.
The problem? They were all living in different places. Want to find that one tutorial about Zeebe workers? Good luck remembering if it was a blog post, a YouTube video, or a forum thread.
We fixed that.
## Where the Data Comes From
Here’s the beautiful part - we’re not hiding anything. Every piece of content comes from publicly available sources:
All aggregated. All searchable. All in one place.
### The Secret Sauce: Machine Mode 🤖
Here’s a neat feature: Camunda Directory includes a machine mode that displays just the plain text content - no fancy styling, no distractions. Perfect for when you need to quickly copy information or get the essentials without the visual noise.
Want to integrate the directory data into your own systems or tools? Drop us an email at [email@u11g.com](mailto:email@u11g.com) and let’s talk.
## What You Can Actually Do With It
Let’s get practical. Here’s what Camunda Directory helps you do:
### ⚡ Super Fast Search
Type something. Anything. Results appear faster than your morning coffee brews. We’re talking instant filtering across 2,466 items from 10 different sources.
No more:
- “Which site was that article on again?”
- “I remember seeing a video about this…”
- “Let me check the forum… no wait, maybe it was on Medium?”
Just search. Find. Done.
### 📅 Stay Up to Date
New blog post? You’ll see it. Fresh forum thread? It’s there. New job opening? Don’t miss it.
Content is fetched daily from all sources. You’re always looking at the latest and greatest.
### 🔗 Cross-Source Discovery
This is where the magic happens. Search for “DMN” and you’ll find:
- The official documentation
- Community tutorials on Medium
- Forum discussions about edge cases
- YouTube walkthroughs
- Academy courses for certification
All from one search. All in one view.
## The Bottom Line
Whether you’re:
- A developer diving into Camunda for the first time
- A solution architect researching integration options
- A team lead scouting for training resources
- A job seeker looking for opportunities at Camunda
- Or just someone who hates having 47 browser tabs open
Camunda Directory has you covered.
2,466 resources. 10 categories. One search bar. Zero excuses.
## Try It Now
Stop reading. Start searching.
👉 [camunda.directory](https://camunda.directory)
P.S. - It’s free. Obviously.
Want to contribute or have feedback? Drop us an email - we’d love to hear from you!
Getting Started with AIME Directory Collections in 5 Minutes
Date: 2025-10-08 | Tags: ai, tutorial, mcp, github-copilot, vscode, workflow
A step-by-step guide to using AIME Directory's collection feature. Learn how to browse, collect, and export AI development resources to supercharge your workflow.
## The 5-Minute Setup
Here’s what we’re going to do:
1. Browse the directory and find what you need
2. Build your personal collection
3. Export everything as a ready-to-use project structure
4. Install it in your project
5. Start coding with enhanced AI assistance
Ready? Let’s go.
## Step 1: Find What You Need
Head to [aime.directory](https://aime.directory). You’ll land on the homepage showing recent additions across all categories. But let’s be more targeted.
Scenario: You’re starting a new TypeScript project and want to set up GitHub Copilot properly, add some useful MCP servers, and grab a few handy prompts.
Click on “MCPs” in the navigation. You’ll see over 800 Model Context Protocol servers. That’s a lot. Use the search bar at the top and type “github” to filter. You’ll see several options:
- GitHub MCP: Gives Claude access to your GitHub repositories
- Git MCP: Local git operations
- GitLab MCP: For GitLab users
Click on GitHub MCP to see the details. You’ll find:
- A description of what it does
- Installation command
- Configuration example
- Tags for easy discovery
See that “Add to Collection” button? Click it. Notice the collection icon in the header now shows “1” - you’ve added your first item.
Want to try a few more? Search for “sqlite” and add the SQLite MCP. Search for “memory” and add the Memory MCP (it gives Claude a persistent memory across conversations - super useful). That’s three MCPs in your collection.
## Step 2: Add Instructions and Prompts
Now click “Instructions” in the navigation. These are framework-specific guidelines that teach GitHub Copilot how to work with your tech stack.
Search for “typescript” and open the TypeScript Best Practices instruction. This file includes:
- Modern TypeScript patterns
- Type safety guidelines
- Common gotchas to avoid
- Project structure recommendations
Add it to your collection. Do the same for Node.js Development Standards if you’re building a backend.
Next, head to “Prompts”. Search for “code review” and add the Code Review Assistant prompt. This one’s a time-saver when you need to review pull requests.
## Step 3: Configure Your IDE
Click on “VSCode Configs” and browse the available presets. The Copilot Essentials config is featured for a reason - it’s a solid baseline that enables the most useful Copilot features without overwhelming you.
Add it to your collection. If you want to experiment with GitHub Copilot’s agent mode, also add Copilot Agent Mode Pro. The export will merge these configs intelligently, with later additions overriding earlier ones where there’s overlap.
## Step 4: Review and Export
Click the collection icon in the header (it should show several items now). You’ll see your Collection page with everything organized by type:
- MCPs: Your three server configurations
- Instructions: TypeScript and Node.js guidelines
- Prompts: Code review assistant
- VSCode Configs: Copilot settings
This is your chance to review. Made a mistake? Click the remove button on any item. Want to clear everything and start over? Use the “Clear All” button.
Happy with your collection? Click “Export ZIP”.
Your browser will download aime-collection-2025-10-08.zip. Let’s see what’s inside.
## Step 5: Understanding the Export
Extract the ZIP file and you’ll find this structure:
```bash
aime-collection-2025-10-08/
├── .vscode/
│ ├── mcp.json
│ └── settings.json
├── .github/
│ └── instructions/
│ ├── typescript-best-practices.instructions.md
│ └── nodejs-development-standards.instructions.md
└── prompts/
└── code-review-assistant.md
```
Let’s break down each part:
### .vscode/mcp.json
This file configures your MCP servers for Claude Desktop or compatible editors:
```json
{
"// NOTE": "Configure each MCP server according to its documentation",
"servers": {
"github": {
"type": "stdio",
"command": "npx -y @modelcontextprotocol/server-github",
"// repo": "https://github.com/modelcontextprotocol/server-github",
"// website": "https://github.com/modelcontextprotocol/server-github"
},
"sqlite": {
"type": "stdio",
"command": "npx -y @modelcontextprotocol/server-sqlite",
"// repo": "https://github.com/modelcontextprotocol/server-sqlite"
},
"memory": {
"type": "stdio",
"command": "npx -y @modelcontextprotocol/server-memory",
"// repo": "https://github.com/modelcontextprotocol/server-memory"
}
}
}
```
The configuration is ready to use. The commented lines provide quick reference to documentation without cluttering the actual config.
### .vscode/settings.json
Your VSCode settings, merged from all the configs you collected:
```json
{
"// Merged from": ["Copilot Essentials"],
"github.copilot.enable": {
"*": true,
"markdown": true,
"plaintext": false
},
"github.copilot.editor.enableAutoCompletions": true
// ... more settings
}
```
The merge strategy is smart: later configs override earlier ones, but objects are deeply merged rather than replaced. This means you can layer configs without losing individual settings.
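Conceptually this is a recursive object merge; here is a small sketch of the idea (not AIME Directory's actual implementation):

```typescript
// Sketch of a "later overrides earlier, objects merge deeply" strategy
// (illustrative only; not AIME Directory's actual merge code).
type Json = string | number | boolean | null | Json[] | { [key: string]: Json };

function isObject(value: Json): value is { [key: string]: Json } {
  return typeof value === "object" && value !== null && !Array.isArray(value);
}

function deepMerge(base: Json, override: Json): Json {
  if (isObject(base) && isObject(override)) {
    const result: { [key: string]: Json } = { ...base };
    for (const [key, value] of Object.entries(override)) {
      result[key] = key in result ? deepMerge(result[key], value) : value;
    }
    return result;
  }
  return override; // scalars and arrays: the later config wins
}
```

Layering Copilot Agent Mode Pro over Copilot Essentials therefore keeps individual settings instead of replacing the whole settings object.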
### .github/instructions/*.instructions.md
GitHub Copilot automatically loads instruction files from this directory. Each file contains framework-specific guidance:
```markdown
# TypeScript Best Practices
- Always use strict mode
- Prefer interfaces over type aliases for object shapes
- Use const assertions for literal types
- ...
```
Drop this folder into your repo, commit it, and Copilot immediately understands your project’s conventions.
### prompts/*.md
Your saved prompts as markdown files:
```markdown
# Code Review Assistant
You are a code review expert. Analyze the following code changes for:
1. Potential bugs or edge cases
2. Performance implications
3. Security vulnerabilities
4. Code style and best practices ...
```
Copy these into your AI chat when needed, or integrate them with your IDE if it supports prompt files.
## Using Your Export
Now the easy part. Copy the extracted folders into your project:
```bash
# In your project directory
cp -r path/to/aime-collection-2025-10-08/.vscode .
cp -r path/to/aime-collection-2025-10-08/.github .
cp -r path/to/aime-collection-2025-10-08/prompts .
```
Or if you’re starting fresh, just extract the ZIP as your project template and build from there.
Commit these files to your repo:
```bash
git add .vscode .github prompts
git commit -m "Add AI development configuration from AIME Directory"
```
Now your entire team benefits. Anyone who clones the repo gets the same AI setup automatically.
## Pro Tips
**Start Small**: Don’t add 50 MCPs to your first collection. Start with 3-5 that you’ll actually use. You can always create new collections.
**Collection Sharing**: The collection is stored in your browser’s localStorage. If you want to share it with your team, export it, commit the files, and let them import by using the same structure.
**Experiment Freely**: The collection is just in your browser until you export. Add things, try them out, remove what doesn’t work. There’s no commitment until you click export.
**Read the Documentation**: Each item in the directory links to its source repository. If you need advanced configuration for an MCP or want to understand an instruction file better, click through and read the docs.
**Update Regularly**: The directory is continuously updated with new MCPs and improvements. Check back periodically and update your collection exports.
## Common Workflows
Let me share a few workflows I use regularly:
**The Quick Start**: For a new project, I export a basic collection (essential MCPs + framework instructions + VSCode config). Takes 2 minutes, gets me up and running immediately.
**The Experiment**: When I want to try new tools, I create a collection just for experimentation. Add several similar MCPs, export to a test project, and see which one fits best.
**The Team Template**: I maintain a shared collection for my team’s standard setup. When someone joins, they get the export and they’re immediately aligned with our tooling.
**The Learning Path**: For learning a new framework, I collect all relevant instructions and prompts. It’s like having a curated knowledge base that I can export and reference anytime.
## What If I Make a Mistake?
Don’t worry about it. Your collection is in your browser until you export. If you:
- Added the wrong item: Click remove on the collection page
- Want to start over: Click “Clear All”
- Exported too early: Just create a new collection and export again
- Need to modify: The exported files are just text files - edit them directly
There’s no database, no account, no permanent storage until you explicitly export. It’s designed to be forgiving.
## Next Steps
You now know how to use AIME Directory’s collection feature. You can:
✅ Browse and search for AI development resources
✅ Build a custom collection of tools and configurations
✅ Export everything as a ready-to-use project structure
✅ Install it in your projects in seconds
In my next post, I’ll show you how to contribute to AIME Directory. Found an amazing MCP that’s not listed? Created a useful instruction file for your framework? Want to share a prompt that saves you hours? I’ll walk through the contribution process.
Until then, go build your collection and see how much faster you can get set up on your next project. I think you’ll be surprised.
Happy collecting! 🎯
How to use a proxy in a nodejs environment
Date: 2024-12-08 | Tags: newsletter
How to use a proxy in a nodejs environment
There is an established standard by which proxies are configured. It runs via the following environment variables:
- `https_proxy`: Proxy for https traffic
- `http_proxy`: Proxy for http traffic
- `no_proxy`: URLs that should not run via a proxy.
The native fetch client of Node.js does not offer this out of the box, but the undici HTTP client ships an agent that you can use:
```javascript
import { EnvHttpProxyAgent } from "undici";

// EnvHttpProxyAgent picks up https_proxy, http_proxy and no_proxy on its own.
const ENV_HTTP_PROXY_AGENT = new EnvHttpProxyAgent();
const proxyAgent = { dispatcher: ENV_HTTP_PROXY_AGENT };

// Passing the agent as dispatcher routes the request through the proxy.
await fetch("https://...", { ...proxyAgent });
```
The Node type definitions don't include a dispatcher attribute for fetch, but the option is supported at runtime. So if you're using TypeScript you can ignore the error or use the beloved `as any` pattern for the proxy agent.
```typescript
const proxyAgent = { dispatcher: ENV_HTTP_PROXY_AGENT } as any;
```
And that’s everything, no manual evaluation of the environment variables. Everything is handled by the `EnvHttpProxyAgent` from `undici`.
Introducing weeklyfoo
Date: 2024-01-28 | Tags: newsletter
Introducing weeklyfoo
For some time now, I have aggregated various newsletters that point me to interesting articles and tools. I read these for a long time and often took a closer look at various tools. However, I was usually unable to use any of them immediately. An opportunity would often arise and I would remember that there was an article that I had read. And of course I couldn’t find that article again. Great!
So the idea matured that I had to save the links in some form. But how? Bookmark managers have never worked for me, read-it-later tools have never made sense in my daily routines. So I started to pack the links into a weekly blog series. Nicely organized by category, tagged, and a short summary. What I really like: the data is in my repository, I have control over it at all times.
I wasn't sure how long I would be able to keep it up. The first small milestone of 5 weeks worked, then 10, and finally 15 weeks. Plus, it wasn't really bothersome or annoying. I embedded reading, and in most cases catching up, into my daily routines, and the extra work for the weekly blog post was hardly noticeable. I was quite happy with the content: good articles on web development and design, plus interesting tools. And so I wanted to go one step further: it might make sense to create my own newsletter from it.
It's my first newsletter ever. Should I use a marketing platform? And if so, which one? Do I really need all of it? Or do I just build everything I need myself? Conclusion: I built it myself. The existing platforms are very powerful, but also quite expensive once you have many emails or subscribers. And I don't need most of the features at all. But I wanted to be flexible and have an easy way to scale without multiplying costs. I will write another article about the tech stack itself if it is of interest.
So here is my little newsletter: [https://weeklyfoo.com](https://weeklyfoo.com). Curated, lean, free. Let me know if you like it!
Create your own epaper calendar with Canvas
Date: 2023-10-20 | Tags: diy, epaper, calendar
Create your own epaper calendar with Canvas
todo
Codespaces can become a game changer
Date: 2023-10-13 | Tags: github codespaces
Codespaces is a remote container that can be configured for your needs and that includes VSCode to start implementing instantly.
TLDR: I'm very impressed by how easy it is to set up and to use!
[Codespaces](https://github.com/features/codespaces) is a remote container that can be configured for your needs and that includes VSCode so you can start implementing instantly. The nice part: you can start your apps remotely and get a remote port to check the result. Everything runs remotely, so the only requirement is a browser (and of course a GitHub account).
By creating a `devcontainer.json` in a folder called `.devcontainer` you can customize the created container per repository.
Here’s a simple config I used for a frontend react application:
```json
{
"name": "Default Linux Universal",
"image": "mcr.microsoft.com/devcontainers/universal:2-linux",
"updateContentCommand": "pnpm i",
"customizations": {
"vscode": {
"extensions": [
"esbenp.prettier-vscode",
"dbaeumer.vscode-eslint",
"oderwat.indent-rainbow",
"dracula-theme.theme-dracula",
"ms-vsliveshare.vsliveshare"
],
"settings": {
"workbench.colorTheme": "Dracula"
}
}
},
"features": {
"ghcr.io/devcontainers/features/node": "18.18.0"
}
}
```
As you can see you can auto customize VSCode as well:
- Install extensions under `customizations.vscode.extensions`
- Configure settings, like a theme under `customizations.vscode.settings`
Personally I prefer a locally configured VSCode, but I see some cases where Codespaces is a super nice fit:
- You want to onboard someone to your code base easily
- You want to change something without your own machine at hand
- Non-engineering people can easily change things on their own (like CSS, i18n texts, …)
In all cases the user does not have to set up anything locally, because the Codespace provides a working environment including git, node, and all dependencies. The user just needs to make the changes and can start the application to see the outcome.
Cloudflare pages direct upload with stable preview urls
Date: 2023-09-30 | Tags: cloudflare, typescript, nodejs, github actions
Cloudflare pages direct upload with stable preview urls. I switched all my projects to Monorepos this year, and I use Cloudflare Pages intensively.
I switched all my projects to monorepos this year, and I use Cloudflare Pages intensively for hosting static websites. There is one small problem with Cloudflare's GitHub integration: you can only connect one project per repository. In a monorepo where I provide pages like a landing page, documentation, and an app, this is a problem.
It’s good that you can also upload the assets directly. The problem with that: you lose some nice benefits:
- Stable preview URL
- PR comment with the links
And those are already quite nice benefits ;) So I started to rebuild the benefits myself. With the help of Wrangler and the Cloudflare API it is not difficult to achieve everything.
To get a stable URL, I originally assumed that deploying with the branch name would simply update a stable branch URL.
```bash
npm i -g wrangler
cd ${{ env.ROOT_DIRECTORY }}
CF_PUBLISH_OUTPUT=$(wrangler pages deploy ${{ env.DIST_DIRECTORY }} --project-name=${{ env.CLOUDFLARE_PAGES_PROJECT_NAME }} --branch="${{ steps.extract_branch.outputs.branch }}" --commit-dirty=true --commit-hash=${{ steps.meta.outputs.sha_short }} | grep complete)
echo "cf_deployments=$CF_PUBLISH_OUTPUT" >> "$GITHUB_OUTPUT"
```
Unfortunately, after a few test runs, I found that this is not the case. I didn’t deal with it further at this point, but tried to take an alternative approach:
- Search all deployments to a branch on every run.
- Delete all deployments
- Upload new assets
For reading and deleting deployments I wrote a small TypeScript program that I run in the CI pipeline.
Read out all previous branch deployments:
```typescript
public async getDeployments(options?: { branch?: string }) {
  const { branch } = options || {}
  const { accountId, projectName, apiToken } = this.config
  const response = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/pages/projects/${projectName}/deployments`,
    {
      headers: { Authorization: `Bearer ${apiToken}` },
    },
  ).then((res) => res.json())
  let deployments = response.result
  if (branch) {
    deployments = deployments.filter(
      (deployment) => deployment.deployment_trigger?.metadata?.branch === branch,
    )
  }
  return deployments
}
```
Deleting a deployment:
```typescript
public async deleteDeployment(id: string) {
  const { accountId, projectName, apiToken } = this.config
  await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/pages/projects/${projectName}/deployments/${id}?force=true`,
    {
      method: 'DELETE',
      headers: { Authorization: `Bearer ${apiToken}` },
    },
  ).then((res) => res.json())
}
```
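The CI step that ties the two methods together then looks roughly like this (a sketch; the interface just names the class that holds the two methods above):

```typescript
// Sketch of the step that combines the two methods above (illustrative only).
// The interface just names the class holding getDeployments()/deleteDeployment().
interface PagesClient {
  getDeployments(options?: { branch?: string }): Promise<Array<{ id: string }>>
  deleteDeployment(id: string): Promise<void>
}

async function cleanupBranchDeployments(client: PagesClient, branch: string) {
  const deployments = await client.getDeployments({ branch })
  for (const { id } of deployments) {
    await client.deleteDeployment(id)
  }
  // Afterwards `wrangler pages deploy ... --branch=<branch>` uploads the fresh
  // assets, so only the latest deployment remains for the branch.
}
```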
The approach has another advantage: deployments that are no longer current are always cleaned up, since I am no longer interested in them anyway.
Building your own Twitter Thread Generator
Date: 2023-03-05 | Tags: typescript, nodejs, beginners, tutorial
That's a twitter thread made with flethy. Setup in less than 5 minutes. Create a thread with a single command. Don't trust any 3rd party service.
To best explain the process to you, I will briefly show you the payload that you can use to start the process:
```json
{
"input": {
"thread": [
"That's a twitter thread made with flethy.",
"Setup in less than 5 minutes.",
"Create a thread with a single command.",
"Don't trust any 3rd party service."
]
}
}
```
The process essentially does the following:
1. Create a tweet with the first element from the array. Here: That's a twitter thread made with flethy.
2. Write a new variable called counter and initialise it with the value 1. This gives us a condition for the loop we are going to run through.
3. If the array contains more than one element go to the next node reply.
4. Create a tweet as a reply to the last tweet and increment the counter.
5. If the counter is less than the number of elements, execute the node reply again, otherwise we are done.
Actually quite simple!
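For comparison, the same loop written out in plain TypeScript would look roughly like this (illustrative only; `postTweet` is a hypothetical helper, and the flethy flow replaces exactly this kind of code):

```typescript
// Plain-TypeScript equivalent of the flow described above (illustrative only;
// postTweet is a hypothetical helper; the flethy flow replaces this code).
async function postThread(
  thread: string[],
  postTweet: (text: string, inReplyTo?: string) => Promise<{ id: string }>,
): Promise<void> {
  if (thread.length === 0) return;
  // 1. Tweet the first element of the array.
  let last = await postTweet(thread[0]);
  // 2.-5. Reply to the previous tweet while the counter is below the length.
  for (let counter = 1; counter < thread.length; counter++) {
    last = await postTweet(thread[counter], last.id);
  }
}
```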
You can now easily start this flow with the TypeScript package `@flethy/flow`:
```typescript
import { FlowEngine } from "@flethy/flow";

const flow = {}; // flow from above
const input = {
  input: {
    thread: [],
  },
}; // input from above

async function main() {
  const engine = new FlowEngine({
    env: {
      env: {},
      secrets: {
        CONSUMER_KEY: process.env.CONSUMER_KEY,
        CONSUMER_SECRET: process.env.CONSUMER_SECRET,
        ACCESS_TOKEN: process.env.ACCESS_TOKEN,
        ACCESS_TOKEN_SECRET: process.env.ACCESS_TOKEN_SECRET,
      },
    },
    flow,
    input,
  });
  await engine.start();
}

main();
```
That's all you need to create a Twitter thread locally in a TypeScript project.
Of course, it is more comfortable in the cloud. You don’t have to set up anything, set any environment variables or write any code. The following video shows you how it works:
[Play](https://youtube.com/watch?v=b3TJK7PYQ58)
And at the end you’ll receive your tweet :)
I’ve implemented flethy so you don’t have to worry about doing repetitive tasks anymore. If you like the project, give it a star on [Github](https://github.com/flethy/flethy). Feel free to ask questions and give feedback! I’m looking forward to it!
And if you want to try out flethy Cloud: [https://flethy.com/signup](https://flethy.com/signup) - it’s free!
300 APIs integrated in minutes, not days
Date: 2023-01-30 | Tags: typescript, nodejs, beginners, tutorial, webdev, api
So now how about just deploying this description to the cloud, and it will be executed there, and you don’t have to worry about it yourself.
```typescript
import { Auth0, nao } from "@flethy/connectors";

const config = nao<Auth0.CreateUser>({
  kind: "auth0.users.create",
  "auth:Authorization": "token",
  "subdomain:tenant": "tenant",
  "body:email": "email",
  "body:family_name": "last",
  "body:given_name": "first",
});
```
The config object contains everything you need to fire the request against the API endpoint: method, url, headers, body. Now you can use your favorite http client to execute the real call (mine is fetch since it’s available in both envs: browser and server).
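Executing such a config with fetch then boils down to something like this (a sketch, assuming the config exposes method, url, headers, and body as described):

```typescript
// Sketch of executing a flethy config with fetch (assuming the config exposes
// method, url, headers and body as described above).
async function execute(config: {
  method: string;
  url: string;
  headers?: Record<string, string>;
  body?: unknown;
}) {
  const response = await fetch(config.url, {
    method: config.method,
    headers: config.headers,
    body: config.body ? JSON.stringify(config.body) : undefined,
  });
  return response.json();
}
```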
The flethy connectors package is fully typed: you don’t need to know how the payload is structured, what the URL for the specific use case is and which method has to be used.
Get started with flethy by just installing the package and then, yeah, select your favorite service to integrate!
```bash
npm i @flethy/connectors
# or
yarn add @flethy/connectors
# or
pnpm add @flethy/connectors
```
Try it out with [webhook.site](https://webhook.site):
```typescript
import { nao, WebhookSite } from "@flethy/connectors";
const config = nao<WebhookSite.CoreGet>({
kind: "webhooksite.core.get",
"param:uuid": "your-individual-uuid",
"header:x-test-header": "flethy",
});
console.log(config);
```
For me, that was not all. It rarely stops at just one step: I usually have several steps that have to be executed partly sequentially and partly in parallel. So why not merge the configurations into one flow? Let's take the Auth0 Management API: I first need to get an access token and can then interact with the API. What if it could look like this?
```json
[
{
"id": "token",
"config": { "namespace": "token" },
"next": [ { "id": "createUser" } ],
"kind": "auth0.auth.accesstoken",
"body:audience": "==>env==>AUTH0_AUDIENCE",
"body:grant_type": "client_credentials",
"body:client_id": "==>secrets==>AUTH0_CLIENT_ID",
"body:client_secret": "==>secrets==>AUTH0_CLIENT_SECRET",
"subdomain:tenant": "==>env==>AUTH0_TENANT"
},
{
"id": "createUser",
"config": { "namespace": "createUser" },
"kind": "auth0.users.create",
"auth:Authorization": "->context.token.access_token->string",
"subdomain:tenant": "==>env==>AUTH0_TENANT",
"body:email": "->context.input.email->string",
"body:family_name": "->context.input.last->string",
"body:given_name": "->context.input.first->string"
}
]
```
Two steps are executed one after the other, the result from the first step is used in the second step. And as you can see, the description is a simple JSON. That means I’m programming language agnostic (what a word). Wohoo! So now how about just deploying this description to the cloud, and it will be executed there, and you don’t have to worry about it yourself. And that’s exactly what I’m working on right now. A first version is ready, a little fine-tuning is missing and then we can start.
The nice thing for me about this approach is that I can try everything out locally without any further dependencies before I deploy it to the cloud.
I’m looking forward to your feedback! And if you want to stay up to date, sign up at [flethy.com](https://flethy.com) and get regular news! And write to me if you are missing an integration!
Be an orchestration hero
Date: 2023-01-30 | Tags: nocode, camunda, cloud, tutorial
I’m going to show you how you can easily integrate with 10 services without writing a single line of code using Camunda 8 and the recently introduced Connectors.
## No line of code
If you are a developer, this won’t make you sweat. But what if you don’t have a developer background? Do you need to request resources from developers? For my case described above, the answer is clearly no! I’m going to show you how you can easily integrate with 10 services without writing a single line of code using Camunda 8 and the recently introduced Connectors.
That's the process I'm going to model in this article. The nice thing is that it is not a purely theoretical workflow. This workflow is deployed in Camunda SaaS and is started as soon as someone registers on the corresponding website with their email address. The workflow sends an email with all necessary information, which I showed in my talk at [CamundaCon](https://camundacon.com) and described here.
## A little basic knowledge
If you already have experience with Camunda 8, then you know that a Service Worker is required to execute a Service Task. Camunda 8 Connectors eliminate the custom development of service workers for certain use cases. So if you want to call a service with a RESTful API, you just use the generic REST Connector, which only needs some configuration: the URL of the API, the request method, authorization, and parameters.
It’s very easy to configure a task: just choose the associated Connector instead of a service task. Currently there is a REST connector, a SendGrid connector and a Slack connector available.
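For contrast, this is roughly the kind of job worker a Connector saves you from writing and operating yourself (a sketch using the zeebe-node client; exact signatures vary by client version):

```typescript
// Rough sketch of the custom job worker a REST Connector replaces
// (using the zeebe-node client; exact signatures vary by client version).
import { ZBClient } from "zeebe-node";

const zbc = new ZBClient(); // reads cluster credentials from the environment
zbc.createWorker({
  taskType: "get-cat-image",
  taskHandler: async (job) => {
    const response = await fetch("https://api.thecatapi.com/v1/images/search");
    const [cat] = await response.json();
    // Put the URL on the process context, like the Result Expression below.
    return job.complete({ catImage: cat.url });
  },
});
```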
To connect a service you have to do three things:
1. You usually need to have an account with the service provider.
2. You need to get the credentials for the API authentication.
3. You need to read the documentation of the service provider, so that you know which request you need to execute.
On Camunda’s side, you will of course need a cluster, credentials in Secrets and the modeled diagram.
Secrets were introduced together with Connectors. They are used to keep the keys for the APIs in a safe place. Thus, no credentials need to be stored in your BPMN diagram, instead references are used. The creation is easily done via the cluster details in the Cloud Console.
## Ready, Steady, Go!
After we have discussed the basic technique it is (finally) time to start with the actual integration. To warm up, let’s start with a few simple GET requests.
### A cat picture must not be missing
Our email should contain a cat picture. In order to not have the same cat picture in every email we use The Cat API to get a random cat image. The following parameters are needed:
- Task: REST Connector (No Auth)
- Request Method: GET
- URL: `https://api.thecatapi.com/v1/images/search`
The API provides us with an answer that looks like this:
```json
[ { "id": "dmr", "url": "https://cdn2.thecatapi.com/images/dmr.jpg", "width": 640, "height": 512 } ]
```
As we are only interested in the URL use the following Result Expression:
```json
{ catImage: body[1].url }
```
With this expression the URL from the response is stored in the variable `catImage` on the process context.
Note: FEEL is used for the expression. Note that the first element is addressed with index 1, not 0.
### Variable contents
The email contains various texts and links. To avoid constantly touching the email template when data changes, we use Contentful to store and read this data. This also has the advantage that this data can be used in other places. And we have a very natural separation between design and marketing copy.
- Task: REST Connector (Bearer Auth)
- Request Method: POST
- URL: `https://graphql.contentful.com/content/v1/spaces/:spaceId`
- Authentication: `secrets.CONTENTFUL` (Bearer Token you can get from the Contentful Console)
The content model in Contentful for this blog post consists of a single collection email with the attributes id (short text), content (short text) and order (integer).
The semantics behind the attributes are as follows:
- id: An internal identifier used to identify the entries.
- content: The actual content, be it text or a link.
- order: I made life a bit easy for myself and just want to execute one request against Contentful. By sorting by order we can target a specific entry.
Contentful provides a GraphQL API. With the following query we can read all entries from my collection:
```json
{ query: "{ emailCollection(order: [order_ASC]) { items { content } } }" }
```
The response from Contentful will be stored on the process context with the following Result Expression:
```json
{ emailContent: { p1: body.data.emailCollection.items[1].content, p2: body.data.emailCollection.items[2].content, p3: body.data.emailCollection.items[3].content, p4: body.data.emailCollection.items[4].content, linkSlideDeck: body.data.emailCollection.items[5].content, linkGithub: body.data.emailCollection.items[6].content, linkBlog: body.data.emailCollection.items[7].content, linkApp: body.data.emailCollection.items[8].content } }
```
### Something that lets you smile (hopefully)
Not all the content is loaded from Contentful. We want to make the recipient laugh and would like to use another API for that. The World Wide Web wouldn't be the World Wide Web if there wasn't a service for everything: there is even a ChuckNorris API!
- Task: REST Connector (No Auth)
- Request Method: GET
- URL: `https://api.chucknorris.io/jokes/random`
In the request we can specify from which category the jokes should be randomly selected. For this we use the following expression as query parameter:
```json
{ category: "science" }
```
We put the joke onto the process context to use it later in the email:
```json
{ chuckNorrisJoke: body.value }
```
I think we’re warmed up now! We ran the first requests, essentially to get data to put into the final email. Next, we’ll interact with services that run in the background unnoticed by our users.
### A new entry in an Airtable
I don’t think I need to say much about Airtable itself. In this section we will create a new entry in a Base.
- Task: REST Connector (Bearer Auth)
- Request Method: POST
- URL: `https://api.airtable.com/v0/:appId/:tableId`
- Authentication: `secrets.AIRTABLE` (Bearer Token you can get from the Airtable App)
The table consists of the following columns: name, email and status. In a real-world example, a team would work with Airtable to assess how much support a user needs to get the best onboarding experience. Using the API, multiple records can be added at once. For our example, we want to create exactly one new record and therefore configure the following payload:
```json
{ records: [ { fields: { Name: data.name, email: data.email, Status: "Todo" } } ] }
```
As we don’t need the response on the process context, there is no need to define a Result Expression. We are already done with the configuration of Airtable!
### Create a new Task in Trello
The Trello example has a similar background as the Airtable example just described. In the context of an onboarding, a task has to be created on a board so that an employee takes care of the user. This configuration is not complicated either:
- Task: REST Connector (No Auth)
- Request Method: POST
- URL: `https://api.trello.com/1/cards`
In doing so, we would like to add a new task on the board ccon22. The authentication works with the Trello API via query parameters, which look like this:
```json
{ key: "secrets.TRELLO_KEY", token: "secrets.TRELLO_TOKEN", idList: "62ff3b5d7651bd19ae07d45c", name: "Hi, "+data.name+"!" }
```
Two hints at this point: you can also store secrets in query parameters; pay attention to the quotation marks so that they are resolved correctly. Furthermore, the API expects an ID for the list where the new task should be added. You can get this ID by appending .json to the board URL in the browser. You will see a JSON representation of the list and can pick the ID.
### Plan for Marketing E-Mails
The email we would like to send is a transactional mail. In the future, however, marketing emails may also be sent if a user has consented. For this we integrate HubSpot and create a new contact. With HubSpot, marketing emails can be sent quite easily in the future.
- Task: REST Connector (Bearer Auth)
- Request Method: POST
- URL: `https://api.hubapi.com/contacts/v1/contact/createOrUpdate/email/:email`
- Authentication: `secrets.HUBSPOT` (Bearer Token you can get from the Hubspot Settings)
In this API request, the email is part of the URL. This can be implemented with a FEEL expression using the following expression:
```
"https://api.hubapi.com/contacts/v1/contact/createOrUpdate/email/"+data.email
```
In addition to the email address, the name should be added to the contact. Nothing simpler than that! We can add any (but existing) attributes as properties in the payload:
```json
{ properties: [ { property: "firstname", value: data.name } ] }
```
### Get metrics from the beginning
It's great when your product seems to be well received. But (subjective) impressions are not hard facts. For this reason, all relevant events should be tracked so that numbers are available to work with. This is what we will do now: when the onboarding process starts, a corresponding event is sent to Mixpanel. Finally, Mixpanel can be used to correlate, visualize, and evaluate these events alongside other events.
In Mixpanel we use the Import API, which allows authentication via Basic Auth. Other Mixpanel APIs expect the credentials in the payload, where Camunda SaaS currently cannot resolve secrets.
- Task: REST Connector (Basic Auth)
- Request Method: POST
- URL: `https://api-eu.mixpanel.com/import`
- Authentication: `"secrets.MIXPANEL_USERNAME"` and `"secrets.MIXPANEL_SECRET"`
The event payload looks as follows:
```json
[ { event: "ccon22", properties: { time: data.now, $insert_id: data.id, distinct_id: data.email, name: data.name } } ]
```
Wow, what a ride! We have already integrated seven services. But we’re not at the end yet!
### Push data to the IPFS network
Blockchain, web3, crypto, there is no way to avoid these terms in the tech scene at the moment. We will also use decentralized infrastructure to make the relevant information available on the IPFS network. The content hash behind the data will be used to resolve the data on the associated website so that the info is not exclusively available in an email. Regular HTTP gateways also exist for web3 technologies. Using the API of Web3Storage we can easily upload data.
- Task: REST Connector (Bearer Auth)
- Request Method: POST
- URL: `https://api.web3.storage/upload`
- Authentication: `secrets.WEB3STORAGE`
We use the data we already received from TheCatAPI, Contentful and the ChuckNorris API and create the following payload:
```json
{ content: { name: data.name, p1: emailContent.p1, p2: emailContent.p2, p3: emailContent.p3, p4: emailContent.p4, linkSlideDeck: emailContent.linkSlideDeck, linkGithub: emailContent.linkGithub, linkBlog: emailContent.linkBlog, chuckNorrisJoke: chuckNorrisJoke, cat: catImage } }
```
We map the hash from the response back to a variable on the process context:
```json
{ ipfsHash: body.cid }
```
The hash can be resolved via the gateway of ipfs.io, via `https://gateway.ipfs.io/ipfs/:hash`.
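Just to illustrate how the hash can be consumed later, here is a minimal sketch that resolves the uploaded content via the public gateway (purely illustrative, not part of the process itself):
```typescript
import axios from "axios";

// Resolve content previously uploaded via Web3Storage through the public ipfs.io gateway.
// Depending on how the content was uploaded, the gateway may return the file directly
// or a directory listing; this sketch assumes the JSON payload from above.
async function resolveFromIpfs(ipfsHash: string) {
  const response = await axios.get(`https://gateway.ipfs.io/ipfs/${ipfsHash}`);
  return response.data;
}
```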
### It’s time to send the email
By now we have everything we need to send the email: a cat picture, a Chuck Norris joke, texts and links, and the recipient.
To send it, we use SendGrid. In SendGrid, it is easy to create a transactional template that contains variables. The variables are enriched by the payload (template data).
- Task: SendGrid Connector
- Authentication: `secrets.SENDGRID` (SendGrid Settings)
- Sender and Receiver
- Template Id
- Template Data
The template data is almost identical to that from the IPFS example. The only addition is that the IPFS hash is also passed as an attribute. The complete mapping is as follows:
```json
{ name: data.name, p1: emailContent.p1, p2: emailContent.p2, p3: emailContent.p3, p4: emailContent.p4, linkSlideDeck: emailContent.linkSlideDeck, linkGithub: emailContent.linkGithub, linkBlog: emailContent.linkBlog, linkApp: emailContent.linkApp, ipfsHash: ipfsHash, chuckNorrisJoke: chuckNorrisJoke, cat: catImage }
```
### Notify the team
Not all employees in a company have access to every tool, and that’s fine. It’s not necessary for everyone to have access to all Hubspot contacts, or to see all Trello boards. Having the information about the arrival of a new customer is something noteworthy for everyone though. It’s hard to imagine companies without messengers. Usually, all employees of a company use the same messenger service. For this example, we’ll send an event to a Slack channel to draw attention to a new user.
The prerequisite is that a Slack app exists and an incoming webhook is set up for a channel. The connector can be configured with the webhook URL:
- Task: REST Connector (No Auth)
- Request Method: POST
- URL: `https://hooks.slack.com/services/:webhookid` or `"secrets.SLACK"`
The payload can be used to set how the message should be displayed in Slack. We put together a fairly simple variant and send the following:
```json
{ blocks: [ { type: "section", text: { type: "mrkdwn", text: "Welcome, "+data.name+"!" } } ] }
```
We did it!!! 10 services integrated, not one line of code. One last step is missing: we need to put all the nodes together.
The website I built as an input channel starts the process, which in turn goes through all the steps described.
I hope I could convince you that you can use Camunda SaaS without writing code. Connectors and Secrets provide the basis for this. Most services offer a RESTful API that can be integrated using the generic REST connector.
Cut a few braids - new NPM package
Date: 2022-05-28 | Tags: typescript, npm, webdev, javascript
Cut a few braids - new NPM package. web3nao http-configs is a zero-dependency library that provides http configs for a number of web3 (and web2) APIs.
Hi all. I’m currently working on a new NPM package that is supposed to be an abstraction layer for API endpoints. It would be awesome if some of you could take a look at it and give feedback on whether this is a useful package or rather heading in the wrong direction.
[web3nao http-configs](https://www.npmjs.com/package/@web3nao/http-configs) is a zero-dependency library that provides http configs for a number of web3 (and web2) APIs in a simple way. The whole library is fully typed and gives easy access to the included APIs.
I was motivated to build it for a simple reason: fewer dependencies in your own projects. What I have done so far:
1. I want to use a service.
2. I’m looking for a corresponding SDK or a suitable package that simplifies the integration.
3. Profit
What I haven’t done: examine each package for its bundled dependencies. The effect is that my own application gets unnecessarily many dependencies, and strictly speaking I can’t be completely sure what happens inside the library. Of course, this also opens up some attack vectors.
With web3nao you don’t get any additional dependencies. It just provides an easy-to-use, typed API for the supported services. Which HTTP client you use (got, axios, fetch, …) is ultimately up to you: the config is mapped to the config of the HTTP client, and that’s it.
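To illustrate the idea (this is not the package's exact API; the config shape below is an assumption), such a config can simply be mapped onto the HTTP client of your choice:
```typescript
import axios from "axios";

// Hypothetical config shape, just to illustrate the concept behind http-configs:
// the library only describes the request, the HTTP client stays your choice.
interface HttpConfig {
  method: "GET" | "POST" | "PUT" | "DELETE";
  url: string;
  headers?: Record<string, string>;
  body?: unknown;
}

async function execute(config: HttpConfig) {
  // Mapping the config to axios; with fetch or got the mapping looks almost identical.
  const response = await axios({
    method: config.method,
    url: config.url,
    headers: config.headers,
    data: config.body,
  });
  return response.data;
}
```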
```bash
got
axios
fetch
```
A significant advantage in my opinion are the provided interfaces. If I connect a new API, or an API that I already know but haven’t used for a while, I always have to invest time to find out how to use it: authorization, headers, payload, paths, … The types in web3nao make the integration much more effective and efficient, because fewer mistakes happen and you get the expected result faster.
[Play](https://youtube.com/watch?v=icKIxm2hwPI)
I appreciate your feedback!
ETHme - your chic web3 identity
Date: 2021-12-24 | Tags: web3, identity, ens, ethereum
ETHme - your chic web3 identity. A web3 profile page, with which I can enrich my data without gas costs. Comparable with bio.link or linktree.
Everybody knows [ENS domains](https://ens.domains/). And everyone knows that you can also store text records (hopefully everyone knows it!). With this data, dApps can easily enrich wallets. But what was missing for me: a web3 profile page, with which I can enrich my data without gas costs. Comparable with bio.link or linktree. Biggest difference: decentralized, and it’s your data and not mine.
At first I asked myself: why do you actually need this? The answer was (at least for me) quite simple: identity is probably the most important asset, and I want to have it completely under my control. Now before everyone starts screaming: sure, that ship has sailed. We’re all on Twitter or LinkedIn. But well, you have to start somewhere! So why not now. And of course everyone can create their own page and then store the IPFS content hash under their ENS domain. But even for that you need some basic technical knowledge. So why not create a simple way to change text or URLs without gas costs? Here we go.
So how does the data get to the profile page? There are exactly two sources: ENS and IPFS. The ENS text records are available anyway and are read and displayed accordingly. The IPFS data is used for additional enrichment. You can set what you want with it. It is also possible to set an avatar and a header image.
This is the short story behind [ethme](https://ethme.at). I will also write a blog post in the next few days about how ethme is technically implemented.
Have fun!
btw: just add your ETH address or ENS domain to see your profile! mine is: [https://ethme.at/urbanisierung.eth](https://ethme.at/urbanisierung.eth)
Automatically update data and commit
Date: 2021-11-25 | Tags: github, node, typescript, github-actions
Automatically update data and commit using Github Actions. For a single page application, various data sources are tapped.
### My Workflow
I just wrote an article about Github Actions, but I don’t want to deprive you of this one! What is it about? For a single page application, various data sources are tapped. However, some data cannot be loaded directly from the application. For this reason I wrote a script that pulls, aggregates and formats the data.
In order for the data to be delivered with the application it must be committed into the repo. Then the regular CI pipeline runs, which builds and publishes the app:
The nice thing is that I don’t have to do anything else, because the Github action runs itself on a regular basis, and every time it commits to the main branch, the CI pipeline runs.
The application was about getting a POC up and running quickly to tap into data from various sources and prepare it accordingly.
The Github workflow consists of three main parts:
1. Setup
2. Execute script
3. Commit and push
```yaml
name: Update Polls and Execs
on:
  schedule:
    - cron: "5 18 * * 1"
jobs:
  resources:
    name: Update Polls and Execs
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - uses: actions/setup-node@v1
        with:
          node-version: 14
      - run: npm install
      - name: Run script to update data
        run: npm run index
      - name: Push data
        uses: test-room-7/action-update-file@v1
        with:
          file-path: |
            src/app/constants/polls.constants.ts
            src/app/constants/proposals.constants.ts
          commit-msg: chore(data) update polls and execs
          github-token: ${{ secrets.GITHUB_TOKEN }}
```
The exciting thing about this action is the scheduled execution. Not everyone may be aware of this, but it can be used to implement cron jobs that do their regular work.
### Additional Resources / Info
- The associated app can be found here: [https://delegates.makerlabs.one/](https://delegates.makerlabs.one/)
- Serves as a supplement to the MakerDAO Delegates program: [https://vote.makerdao.com/delegates](https://vote.makerdao.com/delegates)
- References:
- [Github Actions / Scheduled Events](https://docs.github.com/en/actions/learn-github-actions/events-that-trigger-workflows#scheduled-events)
- [Action to update files](https://github.com/test-room-7/action-update-file)
Have fun!
Aren't the standard actions going too far for you? Write your own one!
Date: 2021-11-25 | Tags: github, node, typescript, github-actions
Aren't the standard actions going too far for you? Write your own one! My Github Action is essentially a small NodeJS app that ships as a Dockerfile and can be found in the Marketplace.
### Motivation
Why do you build something like that? The reason is quite simple: everything that I have to do regularly and is essentially always the same, I automate. Tests run automated, linter checks run automated, the CI pipeline runs automated. So why not automate the screenshots as well? ;)
### My Workflow
My Github Action is essentially a small NodeJS app that ships as a Dockerfile and can be found in the Marketplace. It uses Github’s `@actions/core` [package](https://www.npmjs.com/package/@actions/core), which makes interacting with the infrastructure a breeze.
Those who have already implemented NodeJS applications will have no problems building their own Github Action. I would like to highlight a few things to make it even easier for others to get started.
You need an `action.yaml` which describes the action:
```yaml
name: "Websiteshot"
description: "Github Action to schedule a new Screenshot Job with Websiteshot."
branding:
  icon: "camera"
  color: "blue"
runs:
  using: "docker"
  image: "Dockerfile"
```
The associated Dockerfile contains a few labels that are necessary for the Marketplace:
```dockerfile
FROM node:slim
# A bunch of `LABEL` fields for GitHub to index
LABEL "com.github.actions.name"="Websiteshot"
LABEL "com.github.actions.description"="Github Action to schedule a new Screenshot Job with Websiteshot."
LABEL "com.github.actions.icon"="gear"
LABEL "com.github.actions.color"="blue"
LABEL "repository"="https://github.com/websiteshot/github-action"
LABEL "homepage"="https://websiteshot.app"
LABEL "maintainer"="Adam Urban <email@u11g.com>"
# Copy over project files
COPY . .
# Install dependencies
RUN npm install
# Build Project
RUN npm run build
# This is what GitHub will run
ENTRYPOINT ["node", "/dist/index.js"]
```
The app itself is quite manageable, because it uses the existing Websiteshot NodeJS [package](https://www.npmjs.com/package/@websiteshot/nodejs-client) and creates new jobs with the service:
```typescript
import { Runner } from "./controller/runner.controller";
import { Validation } from "./controller/validation.controller";

const core = require("@actions/core");

async function run() {
  try {
    Validation.checkEnvVars();
    const jobId: string = await Runner.run();
    core.info(`Created Job with Id: ${jobId}`);
  } catch (error) {
    core.setFailed(error.message);
  }
}

run();
```
In this code snippet you can see how the `@actions/core` package makes it very easy to end an action with an error or to write a log output.
But now to the workflow itself, which Websiteshot itself uses to generate new screenshots. After the CI pipeline has run, the last step is to start the Websiteshot action. You have to set a few environment variables that are used by the action.
```yaml
name: Publish
on: [push]
# ...
jobs:
  create-screenshot:
    runs-on: ubuntu-latest
    name: "Create Screenshot via Template"
    steps:
      - uses: websiteshot/github-action@main
        env:
          PROJECT_ID: ${{ secrets.PROJECT_ID }}
          API_KEY: ${{ secrets.API_KEY }}
          TEMPLATE_ID: "abcdef-ghi..."
```
### My Workflow
- [Marketplace](https://github.com/marketplace/actions/websiteshot)
- [Repository](https://github.com/websiteshot/github-action)
- Eat your own dogfood: used to generate screenshots for documentation of Websiteshot: [https://docs.websiteshot.app/](https://docs.websiteshot.app/)
### Additional Resources / Info
- [Github Core NodeJS Package](https://www.npmjs.com/package/@actions/core)
### Disclaimer
While writing this post I noticed that it could be interpreted as an ad for Websiteshot. It’s not meant to be, it’s one of my side projects and I think the description of the action can help others or serve as inspiration to build your own action for your own use case.
Of course, it’s also possible to create all the screenshots within a Github action (without using an external service). All you need is a headless browser and you’re ready to go.
Have fun!
Screenshots - a perfect task to automate!
Date: 2021-02-14 | Tags: github, node, typescript, github-actions
Screenshots - a perfect task to automate! With a simple process and two workers we can automatically take screenshots of pre-configured URLs.
## First iteration
Let’s start with the first iteration. This is the most minimal process that meets our requirements.
As you can see, the process consists of three very simple steps that correspond to the scenario described above:
1. Trigger screenshot job
2. Wait 60 seconds for the screenshots to be generated (Websiteshot doesn’t offer webhooks yet, but it’s on the roadmap)
3. Upload screenshots
Steps 1 and 3 are service tasks, for each of which we again implement a simple worker. The second step is a timer event.
## Let’s build the workers
We need workers that interact with two services. What simplifies things enormously: both Websiteshot and AWS offer NodeJS SDKs that make integration very easy.
### Create screenshots
The worker is quite simple, as the actual screenshot configuration takes place within Websiteshot. Templates can be created there, which contain the parameterization and all URLs.
So that the service task can be used quite flexibly, we pass the TemplateId to be used as a service task header. With this approach we don’t have to touch the worker if we want to use different templates.
```typescript
export class WebsiteshotWorker {
  constructor(private zeebeController: ZeebeController) {}

  public create() {
    this.zeebeController.getZeebeClient().createWorker({
      taskType: Worker.WEBSITESHOT_CREATE_JOB,
      taskHandler: async (job: any, complete: any, worker: any) => {
        const templateId = job.customHeaders.templateid;
        if (!templateId) {
          complete.failure("Template Id not set as header <templateid>");
          return;
        }
        logger.info(`Creating Screenshot Job for Template Id ${templateId}`);
        const screenshotController = new ScreenshotController({
          projectId: ConfigController.get(ConfigParameter.WEBSITESHOT_PROJECT_ID),
          apikey: ConfigController.get(ConfigParameter.WEBSITESHOT_API_KEY),
        });
        try {
          const response = await screenshotController.create(templateId);
          complete.success({ jobId: response.jobId });
        } catch (error) {
          logger.error(error);
          complete.failure("Failed to create screenshot job via websiteshot");
        }
      },
    });
  }
}
```
With the library, the actual Websiteshot call is barely worth mentioning:
```typescript
const response: CreateResponse = await this.websiteshotController.create({ templateId, });
```
### Upload created screenshots
After the first worker has started the screenshot job, the second worker takes care of the next steps:
- Fetch all created screenshots from Websiteshot.
- Download the files temporarily
- Upload the locally available files to S3
For this reason the worker is a bit more extensive:
```typescript
export class BucketWorker {
  constructor(private zeebeController: ZeebeController) {}

  public create() {
    this.zeebeController.getZeebeClient().createWorker({
      taskType: Worker.AWS_BUCKET_UPLOAD,
      taskHandler: async (job: any, complete: any, worker: any) => {
        const jobId = job.variables.jobId;
        if (!jobId) {
          complete.failure("Job Id not found on process context: <jobId>");
          return;
        }
        const screenshotController = new ScreenshotController({
          projectId: ConfigController.get(ConfigParameter.WEBSITESHOT_PROJECT_ID),
          apikey: ConfigController.get(ConfigParameter.WEBSITESHOT_API_KEY),
        });
        const bucketController = new BucketController(
          {
            id: ConfigController.get(ConfigParameter.AWS_SECRET_ID),
            secret: ConfigController.get(ConfigParameter.AWS_SECRET_KEY),
          },
          ConfigController.get(ConfigParameter.AWS_BUCKET)
        );
        try {
          const getResponse: GetResponse = await screenshotController.get(jobId);
          const files: Array<{ url: string; name: string }> = getResponse.jobs.map(
            (screenshotJob) => {
              return {
                url: screenshotJob.data,
                name: `${screenshotJob.url.name}.png`,
              };
            }
          );
          files.forEach((file) => logger.info(`name: ${file.name}`));
          const downloadPromises = files.map((file) =>
            DownloadController.download(file.url, file.name)
          );
          await Promise.all(downloadPromises);
          logger.info(`Uploading Screenshots to Cloud Bucket`);
          const uploadPromises = files.map((file) =>
            bucketController.upload(
              Path.resolve(__dirname, `../..`, DOWNLOAD_FOLDER, file.name),
              file.name
            )
          );
          await Promise.all(uploadPromises);
          complete.success({ screenshots: uploadPromises.length });
        } catch (error) {
          complete.failure("Failed to download or upload the screenshots");
        }
      },
    });
  }
}
```
Let’s take the Worker apart a bit.
#### Which job?
As a parameter, the worker gets the JobId from the process context. The first worker has written the JobId returned from Websiteshot to the process context at the end. So easy game!
#### Which screenshots?
We are using the Websiteshot NodeJS client again for this. Easy peasy. Somehow it doesn’t get more sophisticated…
#### Intermediate step
In order for us to upload the screenshots to the cloud bucket we need to have them available. We take the easy way and save the screenshots temporarily before uploading them again. For this, we don’t need to do anything more than execute a few GET requests. In NodeJS this is done with a few lines of code :)
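The DownloadController itself is not shown in this post. A minimal sketch of such a helper, assuming axios and the same DOWNLOAD_FOLDER the worker resolves against, could look like this:
```typescript
import axios from "axios";
import * as fs from "fs";
import * as Path from "path";

const DOWNLOAD_FOLDER = "downloads"; // assumption, must match the worker's Path.resolve()

// Sketch of a download helper: fetch the screenshot as a stream and write it to disk.
export class DownloadController {
  public static async download(url: string, name: string): Promise<void> {
    const target = Path.resolve(__dirname, "../..", DOWNLOAD_FOLDER, name);
    const response = await axios.get(url, { responseType: "stream" });
    await new Promise<void>((resolve, reject) => {
      const stream = response.data.pipe(fs.createWriteStream(target));
      stream.on("finish", () => resolve());
      stream.on("error", (error: Error) => reject(error));
    });
  }
}
```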
#### Finale Grande
This is the central task of the worker. The previous three steps were just the preparation for this step. But even this part is pretty manageable with the help of the AWS SDK.
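The BucketController is similarly compact. A minimal sketch based on the AWS SDK for JavaScript v2 (the constructor arguments mirror how the worker instantiates it above; everything else is an assumption):
```typescript
import { S3 } from "aws-sdk";
import * as fs from "fs";

// Sketch of the S3 upload used by the worker above (aws-sdk v2).
export class BucketController {
  private s3: S3;

  constructor(credentials: { id: string; secret: string }, private bucket: string) {
    this.s3 = new S3({
      accessKeyId: credentials.id,
      secretAccessKey: credentials.secret,
    });
  }

  public async upload(path: string, name: string): Promise<void> {
    // Stream the local file into the configured bucket under its file name.
    await this.s3
      .upload({
        Bucket: this.bucket,
        Key: name,
        Body: fs.createReadStream(path),
      })
      .promise();
  }
}
```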
Yikes, are we done already? Yes! In fact, with this process and the associated workers, we’ve done everything we need to take screenshots of pre-configured URLs.
## And now?
Now comes the concrete example: Camunda Cloud provides a console through which users can manage clusters and clients. Now I want to have screenshots taken from the Console using a test account. For this purpose I have created the following template:
I deploy and run the process shown above in Camunda Cloud in exactly the same way. To start a new instance you can use [Restzeebe](https://restzeebe.app) again. Once the workers are registered, the service tasks are processed.
The results can be viewed via the Websiteshot Console:
And our screenshots end up in S3:
So, without further ado, in the last few minutes we built a process that automatically takes screenshots from the Cloud Console. I don’t need to mention that the URLs can be replaced quite easily. We can create as many other templates as we want and just reuse the same process. We just need to adjust the header parameter. Pretty cool I think!
You can also view, fork and modify the complete implementation in this repo: [https://github.com/websiteshot/camunda-cloud-example](https://github.com/websiteshot/camunda-cloud-example)
As with the last blog posts in this series: the process can easily be extended or the flow changed. For example, if you want to use the screenshots to automatically update the documentation, you can add an approval process. If you have read the tutorial with the Trello Cards you can for example create a new Trello Card on a specific board. A responsible person can then first look at the screenshots and either approve them for upload or reject them. In case of rejection, a specific message can be sent to a Slack channel because a view is not rendered correctly.
Another nice use case is the automated generation of social share images of conference speakers: at a conference there are many speakers who like to be announced via social media. Here, a template based on HTML and CSS can be parameterized so that only the parameters need to be changed. A process could eventually generate the social share images and publish them to various social media platforms. Create the template once and sit back!
Maybe this tutorial inspired you to automate the generation of your screenshots with the help of processes. If so, I look forward to your reports! And with the time gained, you can now take care of more important things.
Let me know if the article was helpful! And if you like the content follow me on [Twitter](https://twitter.com/urbanisierung), [LinkedIn](https://www.linkedin.com/in/adamurban/) or [GitHub](https://github.com/urbanisierung) :)
Header Photo by [ShareGrid](https://unsplash.com/@sharegrid?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/s/photos/message?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText), last Photo by [Rémi Bertogliati](https://unsplash.com/@remi_b?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on Unsplash.
Send messages to Slack from Camunda Cloud
Date: 2021-02-13 | Tags: camunda, zeebe, slack, automation
Send messages to Slack from Camunda Cloud. Since we want to integrate with Slack you need a Slack Workspace.
## What is needed?
Since we want to integrate with Slack you need a Slack Workspace. Navigate to your [Slack Apps](https://api.slack.com/apps/) and create a new Slack app or use an existing one.
Enable Incoming Webhooks and add a new webhook to a channel.
## Implement Worker
The next step is to have a worker that sends messages to this Slack channel via the webhook that has been set up.
```typescript
import { IncomingWebhook } from "@slack/webhook";
import { ZeebeController } from "../zeebe.controller";

const SLACK_WEBHOOK_BASE = "https://hooks.slack.com/services";

export class SlackWorkerController {
  private webhook: IncomingWebhook | null = null;

  constructor(private zeebeController: ZeebeController) {}

  public createWorker(taskType: string) {
    this.zeebeController.getZeebeClient().createWorker({
      taskType,
      taskHandler: async (job: any, complete: any, worker: any) => {
        const webhookid = job.customHeaders.webhookid;
        const message = job.customHeaders.message;
        const webhookurl = `${SLACK_WEBHOOK_BASE}/${webhookid}`;
        this.webhook = new IncomingWebhook(webhookurl);
        try {
          await this.send(message);
          complete.success();
        } catch (error) {
          complete.failure("Failed to send slack message");
        }
      },
    });
  }

  private async send(message: string) {
    const slackMessage = {
      text: `🚀 ${message} 🚀`,
      mrkdwn: true,
      attachments: [{ title: `Greetings from Rest Zeebe!` }],
    };
    if (this.webhook) {
      await this.webhook.send(slackMessage);
    } else {
      throw new Error(`Failed to initialize Slack Webhook`);
    }
  }
}
```
The worker uses the official [Slack Node Client](https://www.npmjs.com/package/@slack/webhook) which makes integration a breeze. The parameters set are the webhookId and the message. This allows the worker to be used in different places with different webhooks. The message could alternatively come from the process context, but that depends on how you want to use the service task.
## A small process with the Slack Task
The process is pretty unspectacular. It gets exciting when you integrate this service task into a larger context.
Happy Slacking!
I’m quite curious if and how you guys use Slack for active notifications from outside :)
Let me know if the article was helpful! And if you like the content follow me on [Twitter](https://twitter.com/urbanisierung), [LinkedIn](https://www.linkedin.com/in/adamurban/) or [GitHub](https://github.com/urbanisierung) :)
Header Photo by [Jon Tyson](https://unsplash.com/@jontyson?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/s/photos/message?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText), last Photo by [Joan Gamell](https://unsplash.com/@gamell?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on Unsplash.
5 Steps how to track your Team's Mood
Date: 2020-12-12 | Tags: camunda, zeebe, automation, mood
5 Steps how to track your Team's Mood. A small backend is of course necessary to send and accept the mood images.
## A small backend is of course necessary
The backend should take care of sending and accepting the mood images. As the channel I use a simple email. The email should contain five links that reflect the current mood. I chose email because everybody has an e-mail account, be it in a private or business environment. No one has to register for a new service or install software for it.
I would like to use the following approach:
1. A new process instance starts daily.
2. A worker of a service task generates a random id for each participant and sends an email.
3. Links in the email point to a web application that sends an HTTP request to a backend service with the mood.
4. The backend service persists the data and checks beforehand if the request has already been submitted for the generated Id.
## #1 Design the process
Hardly worth mentioning, so simple and so valuable! A special feature is the start event. It is a timer event with the following configuration:
- Timer Definition Type: Cycle
- Timer Definition: R/P1D
With this configuration we tell the workflow engine to start a new instance every day.
## #2 Implement the worker
The worker should do two things:
1. Generate Id
2. Send email
I’ve implemented the following controller to manage the moods:
```typescript
import { v4 } from "uuid";
import { Document } from "../types/Document.type";
import { Mood, MoodRequest } from "../types/Mood.type";
import { User } from "../types/User.type";
import { Error, ErrorType } from "../utils/Error";
import { StorageController } from "./storage.controller";

export class MoodController {
  constructor(private store: StorageController) {}

  public createRequest(user: User) {
    const now = new Date().getTime();
    const moodRequest: MoodRequest = {
      uuid: v4(),
      team: user.team,
      ts: now,
      expiration: now + 1000 * 60 * 60 * 12,
    };
    return moodRequest;
  }

  public async saveMoodRequest(moodRequest: MoodRequest) {
    await this.store.set(Document.MOOD_REQUEST, moodRequest.uuid, moodRequest);
  }

  public async setMood(moodRequestId: string, moodValue: number) {
    const moodRequest: MoodRequest = await this.store.get(
      Document.MOOD_REQUEST,
      moodRequestId
    );
    if (!moodRequest) {
      throw new Error(ErrorType.NotFound, `Mood not found`);
    }
    const now = new Date().getTime();
    if (moodRequest.expiration < now) {
      this.store.delete(Document.MOOD_REQUEST, moodRequestId);
      throw new Error(ErrorType.BadRequest, `Request expired`);
    }
    const mood: Mood = {
      uuid: moodRequest.uuid,
      mood: moodValue,
      team: moodRequest.team,
      ts: now,
      requestTs: moodRequest.ts,
    };
    await Promise.all([
      this.store.delete(Document.MOOD_REQUEST, moodRequestId),
      this.store.set(Document.MOOD, mood.uuid, mood),
    ]);
  }
}
```
Via `createRequest()` a new record is created with a random Id and the associated team. Then the record is stored in a database.
The worker uses this controller to create the request. Afterwards the email is sent via an SMTP server. For the sake of simplicity, the name, the email address and the associated team are set as parameters.
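The email part is not shown here. A minimal sketch with nodemailer (the SMTP host, the credentials, the link format and the frontend base URL are assumptions for this example) could look like this:
```typescript
import * as nodemailer from "nodemailer";

// Sketch: send the five mood links for a generated request Id via SMTP.
// Host, credentials and the frontend base URL are assumptions for this example.
async function sendMoodMail(to: string, name: string, moodRequestId: string) {
  const transporter = nodemailer.createTransport({
    host: process.env.SMTP_HOST,
    port: 587,
    auth: { user: process.env.SMTP_USER, pass: process.env.SMTP_PASS },
  });

  // One link per mood value (1 to 5), pointing to the web application.
  const links = [1, 2, 3, 4, 5]
    .map((mood) => `<a href="https://mood.example.com/${moodRequestId}/${mood}">${mood}</a>`)
    .join(" ");

  await transporter.sendMail({
    from: '"Mood Tracker" <mood@example.com>',
    to,
    subject: `Hi ${name}, how do you feel today?`,
    html: `Pick your mood: ${links}`,
  });
}
```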
## #3 Setup a simple Backend
The backend is very simple: it accepts the request, checks whether there is already a result for the Id, and if not, persists it:
```typescript
const express = require("express");
import { NextFunction, Request, Response } from "express";
import { MoodController } from "../../controller/mood.controller";
import { StorageController } from "../../controller/storage.controller";
import { Error, ErrorType } from "../../utils/Error";

export class MoodRouter {
  public router = express.Router({ mergeParams: true });

  constructor(store: StorageController) {
    this.router.post(
      "/:moodRequestId/:moodValue",
      async (req: Request, res: Response, next: NextFunction) => {
        const moodRequestId: string = req.params.moodRequestId;
        const moodValue: number = Number(req.params.moodValue);
        try {
          if (!moodRequestId || !moodValue) {
            throw new Error(ErrorType.BadRequest, `Mandatory parameter missing`);
          }
          if (moodValue < 1 || moodValue > 5) {
            throw new Error(
              ErrorType.BadRequest,
              `Mood ${moodValue} is not in range [1,5]`
            );
          }
          const moodController = new MoodController(store);
          await moodController.setMood(moodRequestId, moodValue);
          res.send();
        } catch (error) {
          next(error);
        }
      }
    );
  }
}
```
## #4 Use a Frontend to route your request to the Backend
Here I don’t want to go into too much detail, it’s just for routing the request ;)
## #5 Start measuring!
[Play](https://youtube.com/watch?v=s04erEH4neM)
With a few simple steps a service is created that can track your team’s mood. Since the orchestrating component is a process, you can of course add more steps, or customize the flow to send an email only on certain days.
And now: relax! :)
Automate your manual tasks with Camunda and Trello!
Date: 2020-12-10 | Tags: camunda, zeebe, trello, automation
Automate your manual tasks with Camunda and Trello! Trello is a great tool to organize and collaborate on tasks.
## Why Trello?
[Trello](https://trello.com/) is a great tool to organize and collaborate on tasks. For example, there might be a Trello board, that is processed by several people with shared tasks.
## Setup
To use our example with Trello we need a few things:
- Trello Account
- Trello API Key and Token (available at [https://trello.com/app-key](https://trello.com/app-key))
The API Key and the Token are necessary to communicate with the Trello API. For our example we want to implement two actions:
1. Create a new Trello Card.
2. Get notified when something changes on a Trello board.
## Create Trello Card
This task should of course be executed by a worker. The following controller takes care of the communication with the Trello API:
```typescript
import axios, { AxiosRequestConfig, AxiosResponse } from "axios";
import * as functions from "firebase-functions";
import { v4 } from "uuid";
import { Document } from "../types/Document.type";
import { StorageController } from "./storage.controller";

const BASEURL = "https://api.trello.com/1";

export enum TRELLO {
  KEY = "key",
  TOKEN = "token",
  ID_LIST = "idList",
  NAME = "name",
}

export enum ROUTE {
  CARDS = "cards",
}

export class TrelloController {
  private trelloKey: string;
  private trelloToken: string;

  constructor(private store: StorageController) {
    this.trelloKey = functions.config().trello.key;
    this.trelloToken = functions.config().trello.token;
  }

  public async storeWebhookPayload(payload: any) {
    const uuid: string = v4();
    await this.store.set(Document.TRELLO_WEBHOOK_PAYLOAD, uuid, payload);
  }

  public async addCard(idList: string, name: string): Promise<string> {
    const queryParams: URLSearchParams = new URLSearchParams();
    queryParams.append(TRELLO.ID_LIST, idList);
    queryParams.append(TRELLO.NAME, name);
    const result = await this.request("POST", ROUTE.CARDS, queryParams);
    return result ? result.id : undefined;
  }

  private async request(
    method: "GET" | "POST" | "PATCH" | "DELETE",
    route: string,
    queryParams: URLSearchParams
  ) {
    const params = queryParams;
    params.append(TRELLO.KEY, this.trelloKey);
    params.append(TRELLO.TOKEN, this.trelloToken);
    const config: AxiosRequestConfig = {
      method,
      url: `${BASEURL}/${route}`,
      params,
    };
    try {
      const result: AxiosResponse = await axios(config);
      return result ? result.data : undefined;
    } catch (error) {
      console.error(error);
    }
  }
}
```
What is still missing is the integration of the controller into the worker:
```typescript
import { StorageController } from "../storage.controller";
import { TrelloController } from "../trello.controller";
import { ZeebeController } from "../zeebe.controller";

export class TrelloWorkerController {
  constructor(
    private zeebeController: ZeebeController,
    private store: StorageController
  ) {}

  public createWorker(taskType: "trelloAddCard") {
    this.zeebeController.getZeebeClient().createWorker({
      taskType,
      taskHandler: async (job: any, complete: any, worker: any) => {
        const idList = job.customHeaders.idlist;
        const name = job.customHeaders.name;
        const trelloController = new TrelloController(this.store);
        try {
          switch (taskType) {
            case "trelloAddCard":
              const id: string = await trelloController.addCard(idList, name);
              complete.success({ id });
              break;
            default:
              complete.failure(`Tasktype ${taskType} unknown`);
          }
        } catch (error) {
          complete.failure("Failed to create Trello card");
        }
      },
    });
  }
}
```
## Set up a Webhook
Our goal is to be notified when something changes on a board. If Trello cards are moved to Done, our process should be notified. For this purpose Trello offers [Webhooks](https://developer.atlassian.com/cloud/trello/guides/rest-api/webhooks/). All we have to do is provide an HTTP endpoint which Trello calls when something changes.
For this we provide the following endpoint:
```typescript
const express = require("express");
import { NextFunction, Request, Response } from "express";
import { StorageController } from "../../controller/storage.controller";
import { ZeebeController } from "../../controller/zeebe.controller";
import { TrelloBoardType } from "../../types/TrelloBoard.type";
import { Error, ErrorType } from "../../utils/Error";

export class TrelloWebhookRouter {
  public router = express.Router({ mergeParams: true });

  constructor(store: StorageController) {
    this.router.post(
      "/",
      async (req: Request, res: Response, next: NextFunction) => {
        const payload: TrelloBoardType = req.body as TrelloBoardType;
        try {
          if (
            payload &&
            payload.action &&
            payload.action.type === "updateCard" &&
            payload.action.data.listAfter.name === "Done"
          ) {
            const id = payload.action.data.card.id;
            const zeebeController = new ZeebeController();
            await zeebeController.publishMessage(id, "Card done");
          }
          res.send();
        } catch (error) {
          throw new Error(ErrorType.Internal);
        }
      }
    );

    this.router.get("/", async (req: Request, res: Response, next: NextFunction) => {
      res.send();
    });
  }
}
```
There are two conditions we want to react to: we check whether a card has been changed (`payload.action.type === "updateCard"`), and whether the card is on the Done list after the change (`payload.action.data.listAfter.name === "Done"`).
The Id of the changed card is read from the payload: `const id = payload.action.data.card.id;`
We use this Id as the correlation key for the Message Event in the process, so that the correct instance reacts accordingly.
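The publishMessage() call used in the router above is a thin wrapper around the zeebe-node client. A minimal sketch of what the ZeebeController could look like (field names may differ slightly between zeebe-node versions):
```typescript
import { ZBClient } from "zeebe-node";

// Sketch: correlate a message to the waiting process instance.
// The Trello card Id is used as correlation key, "Card done" as the message name.
export class ZeebeController {
  private zbc = new ZBClient(); // Camunda Cloud credentials via environment variables

  public async publishMessage(correlationKey: string, messageName: string) {
    await this.zbc.publishMessage({
      correlationKey,
      name: messageName,
      variables: {},
      timeToLive: 10000, // milliseconds the message is buffered by the broker
    });
  }
}
```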
## Let’s model the process
It’s time to put all the pieces together! For this we model a process with a service task to create a new Trello card and a Message Event waiting for a Trello card to be completed.
You can see the whole thing in action here:
[Play](https://youtube.com/watch?v=FRNMKZAz-AM)
In the video two browser windows are arranged one below the other. In the upper window there is a tab with [Restzeebe](https://restzeebe.app) and Operate, in the lower window you can see the Trello Board that is used. The following happens:
1. Restzeebe: Starting a new process instance with the BPMN Process Id trello.
2. Trello Board: A new Trello Card is created with the title Nice!. So the worker has received a new task and created a new Trello Card via the Trello API accordingly.
3. Operate: A running process instance is visible, which waits in the Message Event.
4. Trello Board: We complete the Trello Card by moving it to the Done list.
5. Operate: The process instance is no longer in the Message Event, but is completed. The Trello Webhook signaled the change and our backend sent a message to the Workflow Engine.
## Now comes the wow-effect (hopefully)
Of course, the process is very simple, but it should only be the proof of concept. Since the Worker was implemented generically, we can configure lists freely. From the upper simple process we can model a process that sets up todos when a new employee signs his contract:
The worker shown above is only a very first iteration. It can of course become even more generic, so ideally someone who has nothing to do with the technical implementation can design and modify the process.
And of course I don’t have to mention that Trello is just an example. Trello can be replaced by any other task management tool that offers an API:
- Github Issues
- Jira
- Todoist
- Many others
I hope it helped you and you can re-use the use case in your context! I’m a big fan of automation so you have plenty of time for other things to put on your todo list ;)
Go beyond the basics
Date: 2020-12-08 | Tags: camunda, zeebe, bpmn
Go beyond the basics. In the first example the worker determines a random number and returns this number to the started instance.
## Random Number
In the first example the worker determines a random number and returns this number to the started instance. This number is written to the process context, and the following gateway checks whether the number is greater than 5 or not. Each example contains three actions that can be triggered:
1. deploy: Deploy the BPMN diagram to your cluster.
2. start: Start a new instance of the BPMN diagram.
3. worker: A worker registers for a few seconds to your cluster and executes the code.
Execute the first two steps and switch to Operate. With Operate you can see all deployed BPMN diagrams and completed/running instances. So after the second step a new instance has started and is waiting in the node Random Number. The process does not continue because a worker has to execute the corresponding task first. If you now let the worker run you will notice that the instance continues running after a short time and finally terminates.
The NodeJS implementation is very simple for this worker:
```typescript
const { ZBClient } = require("zeebe-node");

function createWorkerRandomNumber() {
  // initialize node js client with camunda cloud API client
  const zbc = new ZBClient({
    camundaCloud: {
      clientId: connectionInfo.clientId,
      clientSecret: connectionInfo.clientSecret,
      clusterId: connectionInfo.clusterId,
    },
  });

  // create a worker with task type 'random-number'
  zbc.createWorker({
    taskType: "random-number",
    taskHandler: async (job: any, complete: any, worker: any) => {
      try {
        const min =
          job.customHeaders.min && job.customHeaders.max
            ? Number(job.customHeaders.min)
            : 0;
        const max =
          job.customHeaders.min && job.customHeaders.max
            ? Number(job.customHeaders.max)
            : 10;
        const randomNumber = Math.floor(Math.random() * (max - min + 1) + min);
        complete.success({ randomNumber });
      } catch (error) {
        complete.failure(error);
      }
    },
  });
}
```
The task type is configured in the attributes of a service task in the BPMN diagram:
The same applies to the gateway. In this case we want to attach the condition to a variable on the process context, which was set by the worker. The two outgoing paths of the gateway are configured as follows:
`# NO =randomNumber<=5` and `# YES =randomNumber>5`
There is nothing more to tell. But you see how easy it is to write a simple worker and use the result in the further process.
## Increase Number
The second example is also quite simple. It represents a simple loop. The corresponding worker implementation looks like this:
```typescript
const { ZBClient } = require("zeebe-node");

function createWorkerIncreaseNumber() {
  const zbc = new ZBClient({
    camundaCloud: {
      clientId: connectionInfo.clientId,
      clientSecret: connectionInfo.clientSecret,
      clusterId: connectionInfo.clusterId,
    },
  });

  zbc.createWorker({
    taskType: "increase-number",
    taskHandler: async (job: any, complete: any, worker: any) => {
      const number = job.variables.number ? Number(job.variables.number) : 0;
      const increase = job.customHeaders.increase
        ? Number(job.customHeaders.increase)
        : 1;
      try {
        const newNumber = number + increase;
        complete.success({ number: newNumber });
      } catch (error) {
        complete.failure(error);
      }
    },
  });
}
```
The worker is structured in the same way as the first example. The main difference is that it uses a value from the process context as input. This value is incremented at every execution. You can also see that the abort criterion is not part of the worker implementation: the worker should concentrate fully on its complex (haha) task, `i++;`.
The abort criterion is modeled in the process, and that is exactly where it belongs. Because when we model processes, we want to be able to read the sequence from the diagram. In this case: when is the loop terminated?
## Webhook.site
This is my favorite example in this section. It shows a real use case by executing an HTTP request. To see the effect, the service from [Webhook.site](https://webhook.site) is used. You will get an individual HTTP endpoint which you can use for this example. If a request is sent to the service, you will see a new entry on the dashboard.
To make this example work with your individual Webhook.site the Webhook Id must be set accordingly. Below the start action you will find an input field where you can enter either your Id or your individual Webhook.Site URL. Restzeebe extracts the Id from the URL accordingly.
The underlying worker code now looks like this:
```typescript
import axios, { AxiosRequestConfig, AxiosResponse } from 'axios'
const { ZBClient } = require('zeebe-node')

function createWorkerWebhook() {
  const zbc = new ZBClient({
    camundaCloud: {
      clientId: connectionInfo.clientId,
      clientSecret: connectionInfo.clientSecret,
      clusterId: connectionInfo.clusterId,
    },
  })

  zbc.createWorker({
    taskType: 'webhook',
    taskHandler: async (job: any, complete: any, worker: any) => {
      const webhookId = job.customHeaders.webhook
        ? job.customHeaders.webhook
        : job.variables.webhook
      const method: 'GET' | 'POST' | 'DELETE' = job.customHeaders.method
        ? (String(job.customHeaders.method).toUpperCase() as 'GET' | 'POST' | 'DELETE')
        : 'GET'
      try {
        if (!webhookId) {
          throw new Error('Webhook Id not configured.')
        }
        if (!method || !['GET', 'POST', 'DELETE'].includes(method)) {
          throw new Error(
            'Method must be set and one of the following values: GET, POST, DELETE'
          )
        }
        const url = 'https://webhook.site/' + webhookId
        const config: AxiosRequestConfig = { method, url }
        const response: AxiosResponse = await axios(config)
        complete.success({ response: response.data ? response.data : undefined })
      } catch (error) {
        complete.failure(error)
      }
    },
  })
}
```
Under the hood, [Axios](https://github.com/axios/axios) is used to execute the HTTP request. The worker is designed in a way that lets you configure the HTTP method yourself. To do this, you must download the BPMN diagram, navigate to the service task's header parameters and set a different method.
I like this example for several reasons, but the most important one is: if you already have a microservice ecosystem and the services interact via REST it is a small step to orchestrate the microservices through a workflow engine.
## Challenge
Maybe you are curious now and want to get your hands dirty? Restzeebe offers a little challenge at the end. Again, no code is necessary, but you have to model, configure, deploy and start an instance by yourself. Camunda Cloud comes with an embedded modeler that you can use for this. I won’t tell you which task it is ;) But there is a [Highscore](https://restzeebe.app/highscore), where you can see how you compare to others ;)
Have fun!
Is there an alternative to spaghetti?
Date: 2020-12-08 | Tags: camunda, zeebe, bpmn, workflows
Is there an alternative to spaghetti? Can’t the spaghetti effect occur quickly as well when I design and execute processes?
## Minute 1
[Register for Camunda Cloud](https://accounts.cloud.camunda.io/signup?campaign=restzeebe). Fill out the registration form and confirm your email address.
## Minute 2
[Log in to the cloud console](https://camunda.io) and create your first cluster. Jump to the cluster details by clicking on the cluster.
## Minute 3
Create an API client: this is necessary to communicate with your cluster. You can see it as a key to your cluster. Without this key the door to your cluster will remain closed. Once you have created your client, you will see a dialog with your credentials. You also have the possibility to download a file that contains export statements. Downloading this file is the easiest way, because it bundles all the information in one place.
## Minute 4
[Log in to Restzeebe.](https://restzeebe.app) You can answer a few questions about yourself that will help make the product better and then you’re ready to go.
## Minute 5
Import the created client. If you downloaded the file, you can paste its entire content into the input field; the necessary information will then be extracted. Alternatively, you can enter all the necessary data one by one: ClusterId, ClientId and ClientSecret.
With the import you have successfully completed the first achievement of Restzeebe.
## Minute 6
In the next step you interact with your cluster for the first time. Get the status of your cluster. If this action is successful, you have communicated with your cluster for the first time. So far it was relatively boring - you have so to speak given a ping and received a pong.
## Minutes 7 to 10
Now comes the exciting part :) Deploy the first model. Restzeebe deploys a simple first workflow consisting of a start and end event. In between there is an intermediate message event. This means that a started instance waits in the message node until a message arrives that matches the configured parameters.
Open Operate (the link is highlighted) to see your workflows (and instances). Since you have only deployed one workflow so far you will only see this entry on the dashboard.
Now start a new instance. Basically you can start any workflow with Restzeebe. You only need the BPMN Process Id. Since you have deployed the workflow described above it makes sense to start a new instance. The BPMN Process Id of the workflow can be found in the response of the deployment. You have to enter this Id in the input field.
If you now jump back to Operate and refresh the page you will see an active instance. This instance is waiting in the message node.
In the last step you can now send a message to your cluster. In the description of the action you will find an icon that prepopulates the input fields. Send the message and switch back to Operate. The instance should now be finished.
Congratulations, you have executed your first workflow in the cloud!
## Isn’t that a bit too easy?
Admittedly it is a very simple workflow. Maybe you are thinking now: Whoa, seriously? I wasted 20 minutes of my life for this? I can only tell you: this is just the beginning. It is your Hello World process.
As described at the beginning, everything is a process. It is certainly not reasonable to model a workflow for every use case and let it run through a workflow engine. But there are enough examples where it makes perfect sense.
I would like to finish this article with spaghetti. Can’t the spaghetti effect occur quickly as well when I design and execute processes? And the answer is very clear: yes. But the big difference from my point of view is that it is clearly visible. And that quickly leads to headaches ;)
I hope you did not expect an alternative recipe that plays in the league of Spaghetti Bolognese ;)
Play with node-canvas and build something useful
Date: 2020-01-12 | Tags: camunda, zeebe, bpmn, workflows
Play with node-canvas and build something useful. I built my first useful project with canvas.
## Canvas is not the problem, it’s math!
For developers the configurability of things is quite natural, and I wanted to leave different configurations open. Not many elements are needed for a Dot Calendar: circles and text. That's it.
So to start, set up a node project and install canvas:
```bash
npm install canvas --save
```
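Before drawing, you need a canvas and a 2D context. A minimal setup could look like this (the dimensions and output file name are arbitrary):
```typescript
import { createCanvas } from "canvas";
import * as fs from "fs";

// Create a drawing surface and a 2D context; the dimensions are arbitrary.
const canvas = createCanvas(1200, 1600);
const ctx = canvas.getContext("2d");

// ...draw circles and text on ctx (see below)...

// Finally, write the result to a PNG file.
fs.writeFileSync("calendar.png", canvas.toBuffer("image/png"));
```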
To draw a circle you use arc:
```typescript
ctx.beginPath();
ctx.strokeStyle = this.properties.dots.dotStrikeColor;
ctx.lineWidth = this.properties.dots.dotLineWidth;
ctx.fillStyle = this.getFillColor(dotFlag);
ctx.arc(x, y, radius, 0, Math.PI * 2, true);
ctx.stroke();
ctx.fill();
ctx.closePath();
```
Adding a text is also very easy with `fillText()`.
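For example, a month label could be drawn like this (font, color and coordinates are just placeholders):
```typescript
// Draw a month label above a column of dots; font and position are placeholders.
ctx.font = "16px sans-serif";
ctx.fillStyle = "#333333";
ctx.fillText("January", startX, startY - 20);
```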
The art of this lies in mathematics:
- Number of circles per month
- Radius of the circles depending on the base area
- Basically distances (between circles, between texts, …)
And there are some more configurations to consider. This is not about higher mathematics either, but the model has to be assembled nevertheless. To determine the x and y coordinates of the circles I used for example the following formula:
```typescript
const x = startX + (month * textDistance + month * columns * (radius * 2 + distanceBetweenCirclesX) + column * (radius * 2 + distanceBetweenCirclesX));
const y = startY + day * (radius * 2 + distanceBetweenCirclesY);
```
With the help of configuration files most of the parameters I need can be adjusted. I am quite proud of the results :)
Here you can find examples with different color schemes and different numbers of columns per month:
The whole project can be found [here](https://github.com/urbanisierung/dot-calendar).
I still have a few ideas in my head that I would like to implement, but for now it has served its purpose. And I built my first useful project with canvas ;)
How do I implement a command line tool?
Date: 2020-01-04 | Tags: cli, npm, typescript, nodejs
How do I implement a command line tool? I’m a software engineer, so I expect a direction in which a product should be developed.
## Define MVP
I’m a software engineer, so I expect a direction in which a product should be developed, and then I decide how the product will be implemented. In this case I am also the product owner.
Since I want to solve a personal problem it is relatively simple. I am the target group, so I decide what is the minimum that needs to be implemented before the product can be shipped. My requirements are clear, tasks can be derived from this:
- Command line tool that can be used globally
- Simple search for commands
- Easy extensibility of commands
## Select technology stack
In this area developers feel much more comfortable ;) Since I’ve been mostly in the node universe lately and npm is a widely used package manager available on Linux, MacOS and Windows systems, the decision was quite easy for me:
- The logic is implemented in TypeScript and Node.
- The tool will be released on npmjs.
- The project is hosted as an open source project at GitHub.
## Implement core logic and CLI interface
I don’t want to go too deep into details, the project can be viewed at GitHub. There are a few things I want to highlight though, as I’m sure you’ll find them interesting:
[inquirer-autocomplete-prompt](https://www.npmjs.com/package/inquirer-autocomplete-prompt) is a very simple and nice input library that is well configurable. It allows you to search for an entry in an array and have it output to the terminal without any custom development.
Every command line tool has a version number, a help, can process parameters or flags if necessary. Nowadays nobody has to implement this himself. [meow](https://www.npmjs.com/package/meow) gives us a basic framework that only needs to be filled up.
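A rough sketch of how the two libraries could be wired together (the cheatsheet entries and prompt wording are assumptions, and the exact option names may differ between library versions):
```typescript
const meow = require("meow");
const inquirer = require("inquirer");
inquirer.registerPrompt("autocomplete", require("inquirer-autocomplete-prompt"));

// Hypothetical cheatsheet entries for illustration.
const commands = [
  "kubectl get pods -n <namespace>",
  "kubectl logs -f <pod>",
  "git rebase -i HEAD~<n>",
];

// meow takes care of --help, --version and flag parsing.
const cli = meow(`
  Usage
    $ cheat
`);

async function main() {
  // Autocomplete prompt: filter the list while typing and print the selection.
  const answer = await inquirer.prompt([
    {
      type: "autocomplete",
      name: "command",
      message: "Search for a command",
      source: async (_answers: unknown, input = "") =>
        commands.filter((command) => command.includes(input)),
    },
  ]);
  console.log(answer.command);
}

main();
```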
## Provide first cheatsheets
This is necessary so that a user can use the tool immediately: no one wants to install a tool and then have to specify long configurations before they can use it. I personally need kubectl commands again and again, whose syntax I have to look up (unfortunately too often). Furthermore there are some zsh git shortcuts for which I have to look into the cheatsheet from time to time. So I've prepared something for them. As a bonus there are gitmojis ;)
## Publish
There is not much to tell about this point: an [npmjs](https://www.npmjs.com/) account is quickly set up, customize the project accordingly and run npm publish.
```bash
npm publish
```
## What do I get out of it now?
I personally have made my life a little bit easier. I can now find the commands I can't remember much faster. If there is another command that seems useful but is not used very often, I can simply add it to the list and that's it.
I also have a simple blueprint that I can use to write more tools if I need them. And maybe it helps a reader or two as well ;)
Generative Art
Chaos Lines
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/chaos-lines)
Lots of lines in a chaotic way. The lines are randomly positioned and the color is also random.
Circle in Circle
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/circle-in-circle)
The idea behind this art piece is to create lots of circles in a circle. The circles are positioned randomly and the size is also random. The circles are filled with a random color.
Crypdentity
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/crypdentity)
Automatically outlined a person with a hoodie and created random lines around them.
Dots
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/dots)
Randomly generated dots. It's a very simple artwork but I like looking at it.
Daftpunk
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/daftpunk)
An homage to Daft Punk. Automatically outlined the Daft Punk logo and created random lines around it.
Keyboard
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/keyboard)
Automatically outlined a keyboard and created random lines around it.
Fruits
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/fruits)
Automatically outlined fruit and created random lines around it.
Freedom
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/freedom)
Automatically outlined two persons and created random lines around them.
Magic Circle
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/magic-circle)
I've seen this kind of art so many times. Time to implement it on my own, and here's the result. I'm quite happy with how it turned out.
Random Ellipse
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/random-ellipse)
Randomly positioned ellipses with random colors and sizes.
Random Freeform
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/random-freeform)
Randomly positioned freeforms with random colors and sizes.
Sand Storm
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/sand-storm)
Different dots are randomly positioned and the color is also random.
Sand Wand
Year: 2023 | Tags: Generative, Art, Creative Coding | [View](https://u11g.com/sand-wand)
Different dots are randomly positioned and the color is also random.