CakeDC Blog

TIPS, INSIGHTS AND THE LATEST FROM THE EXPERTS BEHIND CAKEPHP

How to push Docker image to Container Registry and create App on DigitalOcean Platform Apps

The title speaks for itself, let’s jump right in!

As a preliminary step, we assume a DigitalOcean user with a registered and validated account.

We will use the doctl command-line tool for all communication with DigitalOcean.

Step 1: Install doctl

$ cd ~
$ wget https://github.com/digitalocean/doctl/releases/download/v1.71.0/doctl-1.71.0-linux-amd64.tar.gz
$ tar xvf doctl-1.71.0-linux-amd64.tar.gz
$ sudo mv doctl /usr/local/bin

 

Step 2: Create an API token

 

Go to https://cloud.digitalocean.com/account/api/tokens

Generate a new token with read and write scopes, and save the generated token value somewhere safe: it is only displayed once.

<TOKEN NAME>: personaltoken
<TOKEN VALUE>: 6e981fc2a674dbb7a610b9b85d0c8b00

Step 3: Use the API token to grant account access to doctl

$ doctl auth init --context personaltoken

doctl prompts for the <TOKEN VALUE>; paste it and press Enter. On success you will see:

Validating token... OK
 

Step 4: Validate that doctl is working

$ doctl auth init

doctl prompts for the <TOKEN VALUE> again; enter it and press Enter. On success you will see:

Validating token... OK

Then validate by obtaining the account information:

$ doctl account get

Email                  Droplet Limit    Email Verified    UUID                                    Status
[email protected]    10               true              5415bbf8-d501-4096-9b75-ab781c017948    active

 

Step 5: Create a Container Registry with doctl

<MY-REGISTRY-NAME> : container-nyc-795

<REGION> : nyc3

$ doctl registry create container-nyc-795 --region nyc3

Name                 Endpoint                                       Region slug
container-nyc-795    registry.digitalocean.com/container-nyc-795    nyc3


Important: the region of the Container Registry and the Kubernetes cluster MUST be the same.

Keep in mind that registry names must be unique and lowercase, and may only contain alphanumeric characters and hyphens.
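As a quick pre-flight check, the naming rules above can be encoded in a small shell helper before calling doctl registry create (the helper name is hypothetical, not part of doctl):

```shell
# Hypothetical pre-flight check for the registry naming rules:
# lowercase alphanumeric characters and hyphens only.
valid_registry_name() {
  case "$1" in
    ''|*[!a-z0-9-]*) echo "invalid: $1" ;;
    *) echo "valid: $1" ;;
  esac
}

valid_registry_name "container-nyc-795"   # valid
valid_registry_name "My_Registry"         # invalid (uppercase and underscore)
```

Catching a bad name locally saves a round trip to the API, which would reject it anyway.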

 

Step 6: Log in to authenticate Docker with your registry

$ doctl registry login
Logging Docker in to registry.digitalocean.com

 

Step 7: Create a Kubernetes cluster

$ doctl kubernetes cluster create cluster-static-example --region nyc3

Notice: Cluster is provisioning, waiting for cluster to be running
...................................................................
Notice: Cluster created, fetching credentials
Notice: Adding cluster credentials to kubeconfig file found in "/home/andres/.kube/config"
Notice: Setting current-context to do-nyc3-cluster-static-example

ID                                      Name                      Region    Version        Auto Upgrade    Status     Node Pools
d24f180b-6007-4dbc-a2fe-3952801570aa    cluster-static-example    nyc3      1.22.7-do.0    false           running    cluster-static-example-default-pool

Important: the region of the Container Registry and the Kubernetes cluster MUST be the same here as well.

Cluster creation is not a fast process; expect it to take several minutes.

 

Step 8: Integrate the Kubernetes cluster with the Container Registry

$ doctl kubernetes cluster registry add cluster-static-example

 

Step 9: Get the cluster credentials and connect to the cluster

$ doctl kubernetes cluster kubeconfig save cluster-static-example

Notice: Adding cluster credentials to kubeconfig file found in "/home/andres/.kube/config"
Notice: Setting current-context to do-nyc3-cluster-static-example

To validate this, use the kubectl tool to get the current context. If it is not installed, download the latest release, for example from googleapis:

$ wget https://storage.googleapis.com/kubernetes-release/release/v1.23.5/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/

Check with

$ kubectl config current-context
do-nyc3-cluster-static-example

The output, do-nyc3-cluster-static-example, is the context you created in Step 7.
 

Step 10: Generate the docker image, tag it, and push it to DigitalOcean

We assume that Docker is already installed.

$ mkdir myapp
$ mkdir myapp/html
$ nano myapp/Dockerfile

 

Inside the Dockerfile put:

FROM nginx:latest
COPY ./html/hello.html /usr/share/nginx/html/hello.html

This simple Dockerfile starts from the NGINX server image and copies hello.html into nginx's default html directory.

 

$ nano myapp/html/hello.html

 

Inside hello.html put:

<!DOCTYPE html>
<html>
  <head>
    <title>Hello World!</title>
  </head>
  <body>
    <p>This is an example of a simple HTML page served from the Nginx container.</p>
  </body>
</html>

Then build the docker image, tag it with the repository, and push it:
 

$ cd myapp
$ docker build -t registry.digitalocean.com/container-nyc-795/static-app .

Sending build context to Docker daemon  3.584kB
Step 1/2 : FROM nginx:latest
latest: Pulling from library/nginx
c229119241af: Pull complete
2215908dc0a2: Pull complete
08c3cb2073f1: Pull complete
18f38162c0ce: Pull complete
10e2168f148a: Pull complete
c4ffe9532b5f: Pull complete
Digest: sha256:2275af0f20d71b293916f1958f8497f987b8d8fd8113df54635f2a5915002bf1
Status: Downloaded newer image for nginx:latest
 ---> 12766a6745ee
Step 2/2 : COPY ./html/hello.html /usr/share/nginx/html/hello.html
 ---> 2b9be913c377
Successfully built 2b9be913c377
Successfully tagged registry.digitalocean.com/container-nyc-795/static-app:latest

 

$ docker images

REPOSITORY                                               TAG       IMAGE ID       CREATED         SIZE
registry.digitalocean.com/container-nyc-795/static-app   latest    2b9be913c377   2 minutes ago   142MB

 

$ docker push registry.digitalocean.com/container-nyc-795/static-app

Using default tag: latest
The push refers to repository [registry.digitalocean.com/container-nyc-795/static-app]
ac03ae036a53: Pushed
ea4bc0cd4a93: Pushed
fac199a5a1a5: Pushed
5c77d760e1f4: Pushed
33cf1b723f65: Pushed
ea207a4854e7: Pushed
608f3a074261: Pushed
latest: digest: sha256:22615ad4c324ca5dc13fe2c3e1d2d801bd166165e3809f96ed6a96a2b2ca2748 size: 1777


Step 11: Create the app

Create the file example-static-app.yaml and insert:

alerts:
- rule: DEPLOYMENT_FAILED
- rule: DOMAIN_FAILED
name: example-static-app
region: nyc
services:
- http_port: 80
  image:
    registry_type: DOCR
    repository: static-app
    tag: latest
  instance_count: 2
  instance_size_slug: professional-xs
  name: static-service
  routes:
  - path: /
  source_dir: /

Validate the file with:

$ doctl apps spec validate example-static-app.yaml

Create the app:

$ doctl apps create --spec example-static-app.yaml
Notice: App created
ID                                      Spec Name             Default Ingress    Active Deployment ID    In Progress Deployment ID    Created At                                 Updated At
55a7cb68-65b7-4ff1-b6af-388cdb1df507    example-static-app                                                                            2022-03-30 09:34:01.288257225 +0000 UTC    2022-03-30 09:34:01.288257225 +0000 UTC

If you access https://cloud.digitalocean.com/apps you will see the new app listed, and opening its live URL + /hello.html shows the page served by NGINX.

Step 12: Deploy app

If you modify the code of your app, you need to build and push a new image with docker (see Step 10). You don't need to create a new app: just deploy the new image to the already created app, using its ID:

$  doctl apps create-deployment 55a7cb68-65b7-4ff1-b6af-388cdb1df507
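The rebuild-and-redeploy cycle can be wrapped in a small script. This is only a sketch; REGISTRY, IMAGE, and APP_ID reuse the example values from this article, and the DRY_RUN guard defaults to printing the commands instead of executing them:

```shell
# Redeploy helper (sketch). Substitute your own registry, image and app ID.
REGISTRY="registry.digitalocean.com/container-nyc-795"
IMAGE="static-app"
APP_ID="55a7cb68-65b7-4ff1-b6af-388cdb1df507"
DRY_RUN="${DRY_RUN:-1}"   # default to dry-run; set DRY_RUN=0 to really execute

run() {
  # Print the command in dry-run mode, execute it otherwise.
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

run docker build -t "$REGISTRY/$IMAGE" .
run docker push "$REGISTRY/$IMAGE"
run doctl apps create-deployment "$APP_ID"
```

Run it once in dry-run mode to review the exact commands, then re-run with DRY_RUN=0.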


 

How to automate deployment of a Docker image to the Container Registry on DigitalOcean Platform Apps

Once you have created the application (and if you host the code in GitLab), you can set up a pipeline that builds your code, pushes the image to the DigitalOcean container registry, and deploys it on top of your application.

Step 1: Define variables

The first two variables are defined in GitLab. You can find them inside the project, in the left menu under Settings > CI/CD > Variables:

$DIGITALOCEAN_API_KEY = token generated in DigitalOcean dashboard

$APP_ID = previously generated application identifier

 

More variables can be defined, such as the repository name. The values of these variables will be injected into the file we create below in Step 3.

 

 

Step 2: Register a runner in GitLab

In your project, go to Settings > CI/CD > Runners in the left menu.

Create a project-specific runner using the URL and registration token shown there, then register the GitLab Runner from the command line. It is important to use the docker executor in privileged mode:

# Download the binary for your system
sudo curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64

# Give it permission to execute
sudo chmod +x /usr/local/bin/gitlab-runner

# Register runner
sudo gitlab-runner register -n --url https://git.cakedc.com --registration-token GR1348941gx7sgV3pZFQgRqg5qUR_ --executor docker --description "My Docker Runner" --docker-image "docker:19.03.12" --docker-privileged --docker-volumes "/certs/client"

 

Step 3: Create the file .gitlab-ci.yml

The .gitlab-ci.yml file takes care of:

  • Authentication and identification using doctl in DigitalOcean

  • Generating and sending the docker image to the DigitalOcean container

  • Deploying the container image to an existing app

Create the file .gitlab-ci.yml in the root of your project:

image: docker:20-dind

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  REPOSITORY_URL: registry.digitalocean.com/container-nyc-795/static-app
  CONTAINER_NAME: static-app

services:
  - name: docker:20-dind
    alias: docker
    command: ["--tls=false"]

before_script:
  - apk update
  - apk upgrade
  - apk add doctl --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community
  - docker info

build:
  stage: build
  script:
    - doctl auth init --access-token $DIGITALOCEAN_API_KEY
    - doctl account get     
    - echo $DIGITALOCEAN_API_KEY | docker login -u $DIGITALOCEAN_API_KEY --password-stdin registry.digitalocean.com
    - docker build -t $REPOSITORY_URL:latest .
    - docker push $REPOSITORY_URL
    - doctl apps create-deployment $APP_ID

Now, every time you push to your project, the runner automatically injects the variables defined previously into the pipeline. It then builds an image of your project with Docker-in-Docker, pushes it to the DigitalOcean registry, and deploys it to the configured app.

 

That’s it!

 

Latest articles

The Generational Perception of Work and Productivity in the Remote-Work Era

In the year 2020, everything changed when the world stopped completely during COVID-19. The perception of safety, health, mental health, work, and private life completely turned around and led to a different conception of the world we knew. As the global pandemic thrived, we saw how many jobs could be done from home, because people had to reinvent themselves as we were not able to go to our workplaces. And that settled a statement, changing the perception of work dramatically.

Before it, and for older generations, work was associated with physical presence, rigid schedules, and productivity measured by visible hours. But after it, younger generations saw the potential of working from home or being a so-called digital nomad, giving more priority to flexibility, emotional well-being, and measuring efficiency through results. This change reflects a social evolution guided by new technologies, new expectations, and a more connected workforce.

Remote work has been key in this transformation. For thousands of professionals, the ability to work from home meant reclaiming personal time, reducing stress, and achieving a healthier work-life balance (for example, by reducing commuting time most people get almost 2 extra hours of personal time). Productivity did not decrease; in many cases, it actually improved, because the focus shifted from "time spent" to "goals achieved." This model has also shown that trust and autonomy can lead to more engaged teams.

However, despite all of the perks, many companies are apparently eager to return to traditional workplaces. Maybe it is the fear of losing control or a lack of understanding of the new work dynamics, but this tendency threatens to undo meaningful progress for generations that have already experienced the freedom and effectiveness of remote work. Going back to the old-fashioned way of work feels like a step backward.

So now, the challenge is to find a middle ground that acknowledges the cultural and technological changes of our time, passing the torch to a new generation of workers. Because productivity is no longer measured by how many people are sitting in a chair, but by the value of the final results. And if we want organizations truly prepared for the future, we must listen to younger generations and build work models that prioritize both results and workers' well-being. At CakeDC we do believe in remote work, proving through the years that work can be done remotely no matter the timezone or language.

Scaling Task Processing in CakePHP: Achieving Concurrency with Multiple...

This article is part of the CakeDC Advent Calendar 2025 (December 9th 2025)

Introduction: The Need for Concurrency

While offloading long-running tasks to an asynchronous queue solves the initial web request bottleneck, relying on a single queue worker introduces a new, serious point of failure and bottleneck. This single-threaded approach transfers the issue from the web server to the queue system itself.

Bottlenecks of Single-Worker Queue Processing

The fundamental limitation in the standard web request lifecycle is its synchronous, single-threaded architecture. This design mandates that a user's request must wait for all associated processing to fully complete before a response can be returned.

The Problem: Single-Lane Processing

Imagine a queue worker as a single cashier at a very busy bank. Each item in the queue (the "job") represents a customer.
  1. Job Blocking (The Long Transaction): If the single cashier encounters a customer with an extremely long or slow transaction (e.g., generating a massive report, bulk sending 100,000 emails, or waiting for a slow API), every other customer must wait for that transaction to complete.
  2. Queue Backlog Accumulation: New incoming jobs (customers) pile up rapidly in the queue. This is known as a queue backlog. The time between a job being put on the queue and it starting to execute (Job Latency) skyrockets.
  3. Real-Time Failure: If a job requires an action to happen now (like sending a password reset email), the backlog means that action is critically delayed, potentially breaking the user experience or application logic.
  4. Worker Vulnerability and Downtime: If this single worker crashes (due to a memory limit or unhandled error) or is temporarily taken offline for maintenance, queue processing stops entirely. The application suddenly loses its entire asynchronous capability until the worker is manually restarted, resulting in a complete system freeze of all background operations.
To eliminate this bottleneck, queue consumption must be handled by multiple concurrent workers, allowing the system to process many jobs simultaneously and ensuring no single slow job can paralyze the entire queue.

Improved System Throughput and Reliability with Multiple Workers

While introducing a queue solves the initial issue of synchronous blocking, scaling the queue consumption with multiple concurrent workers is what unlocks significant performance gains and reliability for the application's background processes.

Key Benefits of Multi-Worker Queue Consumption

  • Consistent, Low Latency: Multiple workers process jobs in parallel, preventing any single slow or heavy job (e.g., report generation) from causing a queue backlog. This ensures time-sensitive tasks, like password resets, are processed quickly, maintaining instant user feedback.
  • Enhanced Reliability and Resilience: If one worker crashes, the other workers instantly take over the remaining jobs. This prevents a complete system freeze and ensures queue processing remains continuous.
  • Decoupling and Effortless Scaling: The queue facilitates decoupling. When background load increases, you simply deploy more CakePHP queue workers. This horizontal scaling is simple, cost-effective, and far more efficient than scaling the entire web server layer.

Workflows that Benefit from Multi-Worker Concurrency

These examples show why using multiple concurrent workers with the CakePHP Queue plugin (https://github.com/cakephp/queue) is essential for performance and reliability:
  • Mass Email Campaigns (Throughput): Workers process thousands of emails simultaneously, drastically cutting the time for large campaigns and ensuring the entire list is delivered fast.
  • Large Media Processing (Parallelism): Multiple workers handle concurrent user uploads or divide up thumbnail generation tasks. This speeds up content delivery by preventing one heavy image from blocking all others.
  • High-Volume API Synchronization (Consistency): Workers ensure that unpredictable external API latency from one service doesn't paralyze updates to another. This maintains a consistent, uninterrupted flow of data across all integrations.

The Job

Let's say that you have a queue job like this:

<?php
declare(strict_types=1);

namespace App\Job;

use Cake\Mailer\Mailer;
use Cake\ORM\TableRegistry;
use Cake\Queue\Job\JobInterface;
use Cake\Queue\Job\Message;
use Interop\Queue\Processor;

/**
 * SendBatchNotification job
 */
class SendBatchNotificationJob implements JobInterface
{
    /**
     * The maximum number of times the job may be attempted.
     *
     * @var int|null
     */
    public static $maxAttempts = 10;

    /**
     * We need to set shouldBeUnique to true to avoid a race condition
     * with multiple queue workers.
     *
     * @var bool
     */
    public static $shouldBeUnique = true;

    /**
     * Executes logic for SendBatchNotificationJob
     *
     * @param \Cake\Queue\Job\Message $message job message
     * @return string|null
     */
    public function execute(Message $message): ?string
    {
        // 1. Retrieve job data from the message object
        $data = $message->getArgument('data');
        $userId = $data['user_id'] ?? null;

        if (!$userId) {
            // Log error or skip, but return ACK to remove from queue
            return Processor::ACK;
        }

        try {
            // 2. Load user and prepare email
            $usersTable = TableRegistry::getTableLocator()->get('Users');
            $user = $usersTable->get($userId);

            $mailer = new Mailer('default');
            $mailer
                ->setTo($user->email)
                ->setSubject('Your batch update is complete!')
                ->setBodyString("Hello {$user->username},\n\nThe recent batch process for your account has finished.");

            // 3. Send the email (I/O operation that can benefit from concurrency)
            $mailer->send();
        } catch (\Exception $e) {
            // If the email server fails, we can tell the worker to try again later.
            // The queue system will handle the delay and retry count.
            return Processor::REQUEUE;
        }

        // Success: Acknowledge the job to remove it from the queue
        return Processor::ACK;
    }
}

Setting $shouldBeUnique = true; in a CakePHP Queue Job class is crucial for preventing a race condition when multiple queue workers consume the same queue: it ensures only one instance of the job is processed at any given time, avoiding duplicate execution or conflicting updates.

In another part of the application you have code that enqueues the job like this:

// In a Controller, Command, or Service Layer:
use Cake\ORM\TableRegistry;
use Cake\Queue\QueueManager;
use App\Job\SendBatchNotificationJob; // Our new Job class

// Find all users who need notification (e.g., 500 users)
$usersToNotify = TableRegistry::getTableLocator()->get('Users')
    ->find()
    ->where(['is_notified' => false]);

foreach ($usersToNotify as $user) {
    // Each loop iteration dispatches a distinct, lightweight job
    $data = [
        'user_id' => $user->id,
    ];

    // Dispatch the job using the JobInterface class name
    QueueManager::push(SendBatchNotificationJob::class, $data);
}

// Result: 500 jobs are ready in the queue.

By pushing 500 separate jobs, you allow 10, 20, or even 50 concurrent workers to pick up these small jobs and run the email sending logic in parallel, drastically reducing the total time it takes for all 500 users to receive their notification.

Implementing Concurrency with multiple queue workers

In modern Linux distributions, systemd is the preferred init and service manager. By leveraging User Sessions and the Lingering feature, we can run the CakePHP worker as a dedicated, managed service without needing root privileges for the process itself, offering excellent stability and integration.

SystemD User Sessions

Prerequisite: The Lingering User Session

For a service to run continuously in the background, even after the user logs out, we must enable the lingering feature for the user account that will run the workers (e.g., a service user named appuser):

$ sudo loginctl enable-linger appuser

This ensures the appuser's systemd user session remains active indefinitely, allowing the worker processes to survive server reboots and user logouts.

Creating the Systemd User Unit File

We define the worker service using a unit file, placed in the user's systemd configuration directory (~/.config/systemd/user/).
  • File Location: ~appuser/.config/systemd/user/[email protected]
  • Purpose of @: The @ symbol makes this a template unit. This allows us to use a single file to create multiple, distinct worker processes, which is key to achieving concurrency.
[email protected] Content: Ini, TOML [Unit] Description=CakePHP Queue Worker #%i After=network.target [Service] # We use the full path to the PHP executable ExecStart=/usr/bin/php /path/to/your/app/bin/cake queue worker # Set the current working directory to the application root WorkingDirectory=/path/to/your/app # Restart the worker if it fails (crashes, memory limit exceeded, etc.) Restart=always # Wait a few seconds before attempting a restart RestartSec=5 # Output logs to the systemd journal StandardOutput=journal StandardError=journal # Ensure permissions are correct and process runs as the user User=appuser [Install] WantedBy=default.target

Achieving Concurrency (Scaling the Workers)

Concurrency is achieved by enabling multiple instances of this service template, distinguished by the suffix provided in the instance name (e.g., 1, 2, 3). After creating the file, the user session daemon must be reloaded, and the worker instances started and enabled.

Reload the daemon (as appuser):

$ systemctl --user daemon-reload

Start and enable concurrent workers (as appuser). To run three workers concurrently:

# Start Worker Instance 1
$ systemctl --user enable --now [email protected]

# Start Worker Instance 2
$ systemctl --user enable --now [email protected]

# Start Worker Instance 3
$ systemctl --user enable --now [email protected]

Result: the system now has three independent, managed processes running the bin/cake queue worker command, achieving a concurrent processing pool of three jobs.
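Enabling instances one by one gets tedious as the pool grows. A small helper loop (a sketch, not part of the plugin) can generate the same systemctl commands for any pool size; here it only prints them so you can review before running:

```shell
# Print the enable commands for an N-worker pool of the
# cakephp-worker@ template unit (pipe the output to sh to execute).
gen_worker_cmds() {
  n="$1"
  i=1
  while [ "$i" -le "$n" ]; do
    echo "systemctl --user enable --now cakephp-worker@$i.service"
    i=$((i + 1))
  done
}

gen_worker_cmds 3
```

Scaling the pool later is then a one-liner, e.g. gen_worker_cmds 10 | sh to grow to ten workers.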

Monitoring and Management

systemd provides powerful tools for managing and debugging the worker pool.

Check concurrency status:

$ systemctl --user status 'cakephp-worker@*'

This command displays the status of all concurrent worker instances, showing which are running or if any have failed and been automatically restarted.

View worker logs (all output is directed to the systemd journal):

$ journalctl --user -u 'cakephp-worker@*' -f

This allows developers to inspect errors and task completion messages across all concurrent workers from a single, centralized log. Using systemd and lingering is highly advantageous as it eliminates the need for a third-party tool, integrates naturally with system logging, and provides reliable process management for a robust, concurrent task environment.

Summary

Shifting from a single worker to multiple concurrent workers is essential to prevent bottlenecks and system freezes caused by slow jobs, ensuring high reliability and low latency for asynchronous tasks. One robust way to achieve this concurrency in CakePHP applications is by using Systemd User Sessions and template unit files (e.g., [email protected]) to easily manage and horizontally scale the worker processes. This article is part of the CakeDC Advent Calendar 2025 (December 9th 2025)

Notifications That Actually Work

This article is part of the CakeDC Advent Calendar 2025 (December 8th 2025)

Building a modern application without notifications is like running a restaurant without telling customers their food is ready. Users need to know what's happening. An order shipped. A payment went through. Someone mentioned them in a comment. These moments matter, and how you communicate them matters even more.

I've built notification systems before. They always started simple. Send an email when something happens. Easy enough. Then someone wants in-app notifications. Then someone needs Slack alerts. Then the mobile team wants push notifications. Before you know it, you're maintaining five different notification implementations, each with its own bugs and quirks.

That's exactly why the CakePHP Notification plugin exists. It brings order to the chaos by giving you one consistent way to send notifications, regardless of where they're going or how they're being delivered. The core notification system (crustum/notification) provides the foundation with database and email support built in.

Two Worlds of Notifications

Notifications naturally fall into two categories, and understanding this split helps you architect your system correctly.

The first category is what I call presence notifications. These are for users actively using your application. They're sitting there, browser open, working away. You want to tell them something right now. A new message arrived. Someone approved their request. The background job finished. These notifications need to appear instantly in the UI, update the notification bell, and maybe play a sound. They live in your database and get pushed to the browser through WebSockets.

The second category is reach-out notifications. These go find users wherever they are. Email reaches them in their inbox. SMS hits their phone. Slack pings them in their workspace. Telegram messages appear on every device they own. These notifications cross boundaries, reaching into other platforms and services to deliver your message.

Understanding this distinction is crucial because these two types of notifications serve different purposes and require different technical approaches. Presence notifications need a database to store history and WebSocket connections for real-time delivery. Reach-out notifications need API integrations and reliable delivery mechanisms.

The Beautiful Part: One Interface

Here's where it gets good. Despite these two worlds being completely different, you write the same code to send both types. Your application doesn't care whether a notification goes to the database, WebSocket, email, or Slack. You just say "notify this user" and the system handles the rest.

$user = $this->Users->get($userId);
$user->notify(new OrderShipped($order));

That's it. The OrderShipped notification might go to the database for the in-app notification bell, get broadcast via WebSocket for instant delivery, and send an email with tracking information. All from that one line of code.

Web interface for notifications

Let's talk about the in-app notification experience first. This is what most users interact with daily. That little bell icon in the corner of your application. Click it, see your notifications. It's so common now that users expect it.

The NotificationUI plugin (crustum/notification-ui) provides a complete notification interface out of the box. There's a bell widget that you drop into your layout, and it just works. It shows the unread count, displays notifications in a clean interface, marks them as read when clicked, and supports actions like buttons in the notification.

You have two display modes to choose from. Dropdown mode gives you the traditional experience where clicking the bell opens a menu below it. Panel mode creates a sticky side panel that slides in from the edge of your screen, similar to what you see in modern admin panels.

Setting it up takes just a few lines in your layout template:

<?= $this->element('Crustum/NotificationUI.notifications/bell_icon', [
    'mode' => 'panel',
    'pollInterval' => 30000,
]) ?>

The widget automatically polls the server for new notifications every 30 seconds by default. This works perfectly fine for most applications. Users see new notifications within a reasonable time, and your server isn't overwhelmed with requests. But sometimes 30 seconds feels like forever. When someone sends you a direct message, you want to see it immediately. That's where real-time broadcasting comes in.

Real-Time Broadcasting for Instant Delivery

Adding real-time broadcasting transforms the notification experience. Instead of polling every 30 seconds, new notifications appear instantly through WebSocket connections. The moment someone triggers a notification for you, it pops up in your interface.

The beautiful thing is you can combine both approaches. Keep database polling as a fallback, add real-time broadcasting for instant delivery. If the WebSocket connection drops, polling keeps working. When the connection comes back, broadcasting takes over again. Users get reliability and instant feedback.

<?php $authUser = $this->request->getAttribute('identity'); ?>
<?= $this->element('Crustum/NotificationUI.notifications/bell_icon', [
    'mode' => 'panel',
    'enablePolling' => true,
    'broadcasting' => [
        'userId' => $authUser->getIdentifier(),
        'userName' => $authUser->username,
        'pusherKey' => 'app-key',
        'pusherHost' => '127.0.0.1',
        'pusherPort' => 8080,
    ],
]) ?>

This hybrid approach gives you the best of both worlds. Real-time when possible, reliable fallback always available.

Behind the scenes, this uses the Broadcasting (crustum/broadcasting) and BroadcastingNotification (crustum/notification-broadcasting) plugins working together. When you broadcast a notification, it goes through the same WebSocket infrastructure. The NotificationUI plugin handles subscribing to the right channels and updating the interface when broadcasts arrive.

Creating Your Notification Classes

Notifications in CakePHP are just classes. Each notification type gets its own class that defines where it goes and what it contains. This keeps everything organized and makes notifications easy to test.

namespace App\Notification;

use Crustum\Notification\Notification;
use Crustum\Notification\Message\DatabaseMessage;
use Crustum\Notification\Message\MailMessage;
use Crustum\BroadcastingNotification\Message\BroadcastMessage;
use Crustum\BroadcastingNotification\Trait\BroadcastableNotificationTrait;

class OrderShipped extends Notification
{
    use BroadcastableNotificationTrait;

    public function __construct(
        private $order
    ) {
    }

    public function via($notifiable): array
    {
        return ['database', 'broadcast', 'mail'];
    }

    public function toDatabase($notifiable): DatabaseMessage
    {
        return DatabaseMessage::new()
            ->title('Order Shipped')
            ->message("Your order #{$this->order->id} has shipped!")
            ->actionUrl(Router::url(['controller' => 'Orders', 'action' => 'view', $this->order->id], true))
            ->icon('check');
    }

    public function toMail($notifiable): MailMessage
    {
        return MailMessage::create()
            ->subject('Your Order Has Shipped')
            ->greeting("Hello {$notifiable->name}!")
            ->line("Great news! Your order #{$this->order->id} has shipped.")
            ->line("Tracking: {$this->order->tracking_number}")
            ->action('Track Your Order', ['controller' => 'Orders', 'action' => 'track', $this->order->id]);
    }

    public function toBroadcast(EntityInterface|AnonymousNotifiable $notifiable): BroadcastMessage|array
    {
        return new BroadcastMessage([
            'title' => 'Order Shipped',
            'message' => "Your order #{$this->order->id} has shipped!",
            'order_id' => $this->order->id,
            'order_title' => $this->order->title,
            'tracking_number' => $this->order->tracking_number,
            'action_url' => Router::url(['controller' => 'Orders', 'action' => 'view', $this->order->id], true),
        ]);
    }

    public function broadcastOn(): array
    {
        return [new PrivateChannel('users.' . $notifiable->id)];
    }
}

The via method tells the system which channels to use. The toDatabase method formats the notification for display in your app. The toMail method creates an email. The toBroadcast method formats the notification for broadcast. The broadcastOn method specifies which WebSocket channels to broadcast to.

One notification class, three different formats, all sent automatically when you call notify. That's the power of this approach.

Reach-Out Notifications

Now let's talk about reaching users outside your application. This is where the plugin really shines, because there are so many channels available.

Email is the classic. Everyone has email. The base notification plugin gives you a fluent API for building beautiful transactional emails. You describe what you want to say using simple methods, and it generates a responsive HTML email with a plain text version automatically.

Slack integration (crustum/notification-slack) lets you send notifications to team channels. Perfect for internal alerts, deployment notifications, or monitoring events. You get full support for Slack's Block Kit, so you can create rich, interactive messages with buttons, images, and formatted sections.

Telegram (crustum/notification-telegram) reaches users on their phones. Since Telegram has a bot API, you can send notifications directly to users who've connected their Telegram account. The messages support formatting, buttons, and even images.

SMS through Seven.io (crustum/notification-seven) gets messages to phones as text messages. This is great for critical alerts, verification codes, or appointment reminders: things that need immediate attention and work even without internet access.

RocketChat (crustum/notification-rocketchat) is perfect if you're using RocketChat for team communication. Send notifications to channels or direct messages, complete with attachments and formatting.

The plugin system makes it easy to add new notification channels: create a new plugin for the channel and install it like any other plugin. The brilliant part is that adding any of these channels to a notification is just adding a string to the via array and implementing one method. Want to add Slack to that OrderShipped notification? Add 'slack' to the array and implement toSlack. Done.
public function via($notifiable): array
{
    return ['database', 'broadcast', 'mail', 'slack'];
}

public function toSlack($notifiable): BlockKitMessage
{
    return (new BlockKitMessage())
        ->text('Order Shipped')
        ->headerBlock('Order Shipped')
        ->sectionBlock(function ($block) {
            $block->text("Order #{$this->order->id} has shipped!");
            $block->field("*Customer:*\n{$notifiable->name}");
            $block->field("*Tracking:*\n{$this->order->tracking_number}");
        });
}

Now when someone's order ships, they get an in-app notification with real-time delivery, an email with full details, and your team gets a Slack message in the orders channel. All automatic.
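The other reach-out channels follow the same pattern. As a purely illustrative sketch (the article doesn't show the message-builder API of crustum/notification-telegram, so the TelegramMessage class, its methods, and the telegram_chat_id column below are assumptions, not the plugin's documented API), a Telegram channel might look like this:

```php
// Hypothetical sketch: assumes crustum/notification-telegram exposes a
// TelegramMessage builder; real class and method names may differ.
public function via($notifiable): array
{
    return ['database', 'mail', 'telegram'];
}

public function toTelegram($notifiable): TelegramMessage
{
    // $notifiable->telegram_chat_id is an assumed column holding the chat id
    // saved when the user connected their Telegram account to your app.
    return (new TelegramMessage())
        ->to($notifiable->telegram_chat_id)
        ->content("Your order #{$this->order->id} has shipped!");
}
```

The point is the shape, not the specifics: one new entry in via, one new format method, and the channel plugin handles delivery.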

The Database as Your Notification Store

Every notification sent through the database channel gets stored in a notifications table. This gives you a complete history of what users were notified about and when. The NotifiableBehavior adds methods to your tables for working with notifications.

$user = $usersTable->get($userId);

$unreadNotifications = $usersTable->unreadNotifications($user)->all();
$readNotifications = $usersTable->readNotifications($user)->all();

$usersTable->markNotificationAsRead($user, $notificationId);
$usersTable->markAllNotificationsAsRead($user);

The UI widget uses these methods to display notifications and mark them as read. But you can use them anywhere in your application. Maybe you want to show recent notifications on a user's dashboard. Maybe you want to delete old notifications. The methods are there.
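For instance, the dashboard idea can be sketched with the same behavior methods. The controller name, the identity lookup (which assumes the Authentication plugin), and the view variable are hypothetical; only the unreadNotifications call comes from the plugin's API shown above:

```php
// In a hypothetical DashboardController action: reuse the
// NotifiableBehavior methods to feed a dashboard panel.
public function index()
{
    $usersTable = $this->fetchTable('Users');

    // Assumes the Authentication plugin stores the logged-in user
    // as the "identity" request attribute.
    $identity = $this->request->getAttribute('identity');
    $user = $usersTable->get($identity->id);

    // Latest unread notifications for the dashboard widget
    $unread = $usersTable->unreadNotifications($user)
        ->limit(10)
        ->all();

    $this->set(compact('unread'));
}
```

Because the behavior returns query objects, you can chain the usual CakePHP query methods (limit, order, and so on) before fetching results.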

Queuing for Performance

Sending notifications, especially external ones, takes time. Making API calls to Slack, Seven.io, or Pusher adds latency to your request. If you're sending to multiple channels, that latency multiplies. The solution is queuing. Implement the ShouldQueueInterface on your notification class, and the system automatically queues notification sending as background jobs.

use Crustum\Notification\ShouldQueueInterface;

class OrderShipped extends Notification implements ShouldQueueInterface
{
    protected ?string $queue = 'notifications';
}

Now when you call notify, it returns immediately. The actual notification sending happens in a background worker. Your application stays fast, users don't wait, and notifications still get delivered reliably.
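Queued notifications only go out while a worker process is running. The article doesn't name the queue backend, so treat the exact command as an assumption; with the official cakephp/queue plugin, a worker is typically started from the application root like this:

```shell
# Start a background worker that consumes queued jobs, including
# queued notifications. The command name and available flags depend
# on which queue plugin your application uses.
bin/cake worker
```

In production you would keep this process alive with a supervisor (systemd, Supervisor, or similar) rather than running it by hand.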

Testing Your Notifications

Testing notification systems used to be painful. You'd either send test notifications to real services (annoying) or mock everything (fragile). The NotificationTrait makes testing clean and simple.

use Crustum\Notification\TestSuite\NotificationTrait;

class OrderTest extends TestCase
{
    use NotificationTrait;

    public function testOrderShippedNotification()
    {
        $user = $this->Users->get(1);
        $order = $this->Orders->get(1);

        $user->notify(new OrderShipped($order));

        $this->assertNotificationSentTo($user, OrderShipped::class);
        $this->assertNotificationSentToChannel('mail', OrderShipped::class);
        $this->assertNotificationSentToChannel('database', OrderShipped::class);
    }
}

The trait captures all notifications instead of sending them. You can assert that the right notifications were sent to the right users through the right channels, and you can even inspect the notification data to verify it contains the correct information. Many other assertions are available for testing your notifications.

Localization

Applications serve users in different languages, and your notifications should respect that. The notification system integrates with CakePHP's localization system.

$user->notify((new OrderShipped($order))->locale('es'));

Even better, users can have a preferred locale stored on their entity. Implement a preferredLocale accessor or property, and notifications automatically use it.

class User extends Entity
{
    public function getPreferredLocale(): string
    {
        return $this->locale;
    }
}

Now you don't even need to specify the locale. The system figures it out automatically and sends notifications in each user's preferred language.

Bringing It Together

What I like about this notification system is how it scales with your needs. Start simple, with just database notifications. Add real-time broadcasting when you want instant delivery. Add email when you need to reach users outside your app. Add Slack when your team wants internal alerts. Add SMS for critical notifications.

Each addition is incremental. You're not rewriting your notification system each time; you're adding channels to the via array and implementing format methods. The core logic stays the same.

The separation between presence notifications and reach-out notifications makes architectural sense. They serve different purposes and use different infrastructure, but share the same interface. This makes your code clean, your system maintainable, and your notifications reliable.

Whether you're building a small application with basic email notifications or a complex system with real-time updates, database history, email, SMS, and team chat integration, you're using the same patterns. The same notification classes. The same notify method. That consistency is what makes the system powerful. You're not context switching between different notification implementations. You're just describing what should be notified, who should receive it, and how it should be formatted. The system handles the rest.

This article is part of the CakeDC Advent Calendar 2025 (December 8th 2025)

We Bake with CakePHP