CakeDC Blog

TIPS, INSIGHTS AND THE LATEST FROM THE EXPERTS BEHIND CAKEPHP

Upgrading to CakePHP 4

As you know, CakePHP announced version 4.x last December. I recommend that you consider upgrading your applications to the new version to keep up to date and get all the benefits. Now, let's see how to bake!

 

Step 1: Upgrade PHP

First things first, if you are not running on PHP 7.2 or higher, you will need to upgrade PHP before updating CakePHP. CakePHP 4.0 requires a minimum of PHP 7.2.

 

Step 2: Upgrade Templates and Resources

There is an upgrade CLI tool for renaming and moving the templates and resources:
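The command listing from the original post isn't preserved in this copy. As a rough sketch based on the cakephp/upgrade tool's documented usage (the application path is a placeholder you need to adjust), the steps look like this:

git clone https://github.com/cakephp/upgrade
cd upgrade
composer install --no-dev

# Rename .ctp templates to .php and move src/Template to templates/
bin/cake upgrade file_rename templates /path/to/your/app

# Move src/Locale to resources/locales
bin/cake upgrade file_rename locales /path/to/your/app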

 

After running the tool, the templates and resources have been moved and renamed; check the result below:

* This project doesn't have Resources files

 

Now, let's create a new constant for resources in config/paths.php:
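The snippet itself didn't survive in this copy; in the CakePHP 4 application skeleton the constant looks like this (a minimal sketch for config/paths.php):

// config/paths.php

/*
 * Path to the resources directory.
 */
define('RESOURCES', ROOT . DS . 'resources' . DS);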

Finally, update the paths in config/app.php:
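Again as a sketch mirroring the CakePHP 4 skeleton (your app may have additional path entries), App.paths ends up pointing at the new top-level directories:

// config/app.php
'App' => [
    // ...
    'paths' => [
        'plugins' => [ROOT . DS . 'plugins' . DS],
        'templates' => [ROOT . DS . 'templates' . DS],
        'locales' => [RESOURCES . 'locales' . DS],
    ],
],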

 

Step 3: Upgrade CakePHP

The next step is optional (the Migration Guide also covers it): run the rector command to automatically fix many deprecated method calls:
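The exact invocation was lost in this copy; with the same cakephp/upgrade tool it is roughly the following (the src path is a placeholder):

bin/cake upgrade rector --rules cakephp40 /path/to/your/app/src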

Rector applied some return type declarations to the codebase:

https://github.com/rafaelqueiroz/cakephp-upgrade-sample/commit/d7e5c2ecc5dc28045700a270721f07098a8e189c?branch=d7e5c2ecc5dc28045700a270721f07098a8e189c&diff=split

Pay attention: It is important to apply rector before you upgrade your dependencies.

 

Upgrade CakePHP and PHPUnit:
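The composer commands shown in the original post aren't preserved here; following the migration guide, the attempt looks something like this (version constraints may vary):

composer require --dev --update-with-dependencies "phpunit/phpunit:^8.5"
composer require --update-with-dependencies "cakephp/cakephp:4.0.*"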

PHPUnit can be upgraded easily. Most of the time, though, --update-with-dependencies doesn't work for me for the CakePHP package itself.

The root of the issue is packages using caret version ranges, so let's update debug_kit, migrations, and bake by editing composer.json directly:
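As a sketch of what that edit might look like (the exact constraints depend on your plugin list; these are the major versions that target CakePHP 4):

"require": {
    "cakephp/cakephp": "^4.0",
    "cakephp/migrations": "^3.0"
},
"require-dev": {
    "cakephp/bake": "^2.0",
    "cakephp/debug_kit": "^4.0"
}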

 

Here we go:
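With the constraints updated, the step here is presumably just letting Composer resolve everything:

composer update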

 

Now, let's see how the project looks:

Here we have a few deprecations and warnings. Remember that I mentioned Rector is optional? Well, even when applied, Rector isn't always able to handle these issues.

 

I will use PHPStan to fix this - we will install it with composer:
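The install command itself wasn't preserved here; it is simply the dev dependency:

composer require --dev phpstan/phpstan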

Now we can run phpstan analyse and fix the issues:
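A typical invocation looks like this (the src path and the level are assumptions - raise the level for stricter analysis):

vendor/bin/phpstan analyse src --level=1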

 

It's up to you how much effort you put into the PHPStan issues. I recommend fixing everything. For this post, I fixed only what was needed to run the project after the update; you can check the fixes in this commit.

 

After the last fixes, the project is running well: 

That’s all? No. But we upgraded CakePHP? Yes.

Real applications probably use many plugins, and if these plugins don't have a version for CakePHP 4, you will need to update them yourself. Depending on the size and level of complexity of the project, the upgrade could be hard, but never impossible.

 

If you do not feel confident or your company would like to outsource support for this, don't hesitate to contact us at Cake Development Corporation.

Our team is offering a full upgrade from CakePHP 2/3 to CakePHP 4. This will be a migration of your current application code to make it compatible with CakePHP 4 features, plugins, security settings, etc. We will be doing these migration services for a special rate - something we have never done before! Learn more about our Upgrade Services

You can check the codebase of the examples in this repository. The upgrade branch has all the steps, commit by commit.

With every release CakePHP gets better, and version 4.x is no exception. There are many benefits that come with upgrading, and it makes baking a lot easier.

Latest articles

Scaling Task Processing in CakePHP: Achieving Concurrency with Multiple...

This article is part of the CakeDC Advent Calendar 2025 (December 9th 2025)

Introduction: The Need for Concurrency

While offloading long-running tasks to an asynchronous queue solves the initial web request bottleneck, relying on a single queue worker introduces a new, serious point of failure and bottleneck. This single-threaded approach transfers the issue from the web server to the queue system itself.

Bottlenecks of Single-Worker Queue Processing

The fundamental limitation in the standard web request lifecycle is its synchronous, single-threaded architecture. This design mandates that a user's request must wait for all associated processing to fully complete before a response can be returned.

The Problem: Single-Lane Processing. Imagine a queue worker as a single cashier at a very busy bank. Each item in the queue (the "job") represents a customer.
  1. Job Blocking (The Long Transaction): If the single cashier encounters a customer with an extremely long or slow transaction (e.g., generating a massive report, bulk sending 100,000 emails, or waiting for a slow API), every other customer must wait for that transaction to complete.
  2. Queue Backlog Accumulation: New incoming jobs (customers) pile up rapidly in the queue. This is known as a queue backlog. The time between a job being put on the queue and it starting to execute (Job Latency) skyrockets.
  3. Real-Time Failure: If a job requires an action to happen now (like sending a password reset email), the backlog means that action is critically delayed, potentially breaking the user experience or application logic.
  4. Worker Vulnerability and Downtime: If this single worker crashes (due to a memory limit or unhandled error) or is temporarily taken offline for maintenance, queue processing stops entirely. The application suddenly loses its entire asynchronous capability until the worker is manually restarted, resulting in a complete system freeze of all background operations.
To eliminate this bottleneck, queue consumption must be handled by multiple concurrent workers, allowing the system to process many jobs simultaneously and ensuring no single slow job can paralyze the entire queue.

Improved System Throughput and Reliability with Multiple Workers

While introducing a queue solves the initial issue of synchronous blocking, scaling the queue consumption with multiple concurrent workers is what unlocks significant performance gains and reliability for the application's background processes.

Key Benefits of Multi-Worker Queue Consumption

  • Consistent, Low Latency: Multiple workers process jobs in parallel, preventing any single slow or heavy job (e.g., report generation) from causing a queue backlog. This ensures time-sensitive tasks, like password resets, are processed quickly, maintaining instant user feedback.
  • Enhanced Reliability and Resilience: If one worker crashes, the other workers instantly take over the remaining jobs. This prevents a complete system freeze and ensures queue processing remains continuous.
  • Decoupling and Effortless Scaling: The queue facilitates decoupling. When background load increases, you simply deploy more CakePHP queue workers. This horizontal scaling is simple, cost-effective, and far more efficient than scaling the entire web server layer.

Workflows that Benefit from Multi-Worker Concurrency

These examples show why using multiple concurrent workers with the CakePHP Queue plugin (https://github.com/cakephp/queue) is essential for performance and reliability:
  • Mass Email Campaigns (Throughput): Workers process thousands of emails simultaneously, drastically cutting the time for large campaigns and ensuring the entire list is delivered fast.
  • Large Media Processing (Parallelism): Multiple workers handle concurrent user uploads or divide up thumbnail generation tasks. This speeds up content delivery by preventing one heavy image from blocking all others.
  • High-Volume API Synchronization (Consistency): Workers ensure that unpredictable external API latency from one service doesn't paralyze updates to another. This maintains a consistent, uninterrupted flow of data across all integrations.

The Job

Let's say that you have a queue job like this:

<?php
declare(strict_types=1);

namespace App\Job;

use Cake\Mailer\Mailer;
use Cake\ORM\TableRegistry;
use Cake\Queue\Job\JobInterface;
use Cake\Queue\Job\Message;
use Interop\Queue\Processor;

/**
 * SendBatchNotification job
 */
class SendBatchNotificationJob implements JobInterface
{
    /**
     * The maximum number of times the job may be attempted.
     *
     * @var int|null
     */
    public static $maxAttempts = 10;

    /**
     * We need to set shouldBeUnique to true to avoid race conditions with multiple queue workers
     *
     * @var bool
     */
    public static $shouldBeUnique = true;

    /**
     * Executes logic for SendBatchNotificationJob
     *
     * @param \Cake\Queue\Job\Message $message job message
     * @return string|null
     */
    public function execute(Message $message): ?string
    {
        // 1. Retrieve job data from the message object
        $data = $message->getArgument('data');
        $userId = $data['user_id'] ?? null;

        if (!$userId) {
            // Log error or skip, but return ACK to remove from queue
            return Processor::ACK;
        }

        try {
            // 2. Load user and prepare email
            $usersTable = TableRegistry::getTableLocator()->get('Users');
            $user = $usersTable->get($userId);

            $mailer = new Mailer('default');
            $mailer
                ->setTo($user->email)
                ->setSubject('Your batch update is complete!')
                ->setBodyString("Hello {$user->username},\n\nThe recent batch process for your account has finished.");

            // 3. Send the email (I/O operation that can benefit from concurrency)
            $mailer->send();
        } catch (\Exception $e) {
            // If the email server fails, we can tell the worker to try again later.
            // The queue system will handle the delay and retry count.
            return Processor::REQUEUE;
        }

        // Success: Acknowledge the job to remove it from the queue
        return Processor::ACK;
    }
}

Setting $shouldBeUnique = true; in a CakePHP Queue Job class is crucial for preventing a race condition when multiple queue workers consume the same queue, as it ensures only one instance of the job is processed at any given time, thus avoiding duplicate execution or conflicting updates.

In another part of the application you have code that enqueues the job like this:

// In a Controller, Command, or Service Layer:
use Cake\ORM\TableRegistry;
use Cake\Queue\QueueManager;
use App\Job\SendBatchNotificationJob; // Our new Job class

// Find all users who need notification (e.g., 500 users)
$usersToNotify = TableRegistry::getTableLocator()->get('Users')
    ->find()
    ->where(['is_notified' => false]);

foreach ($usersToNotify as $user) {
    // Each loop iteration dispatches a distinct, lightweight job
    $data = [
        'user_id' => $user->id,
    ];

    // Dispatch the job using the JobInterface class name
    QueueManager::push(SendBatchNotificationJob::class, $data);
}

// Result: 500 jobs are ready in the queue.

By pushing 500 separate jobs, you allow 10, 20, or even 50 concurrent workers to pick up these small jobs and run the email sending logic in parallel, drastically reducing the total time it takes for all 500 users to receive their notification.

Implementing Concurrency with multiple queue workers

In modern Linux distributions, systemd is the preferred init and service manager. By leveraging User Sessions and the Lingering feature, we can run the CakePHP worker as a dedicated, managed service without needing root privileges for the process itself, offering excellent stability and integration.

SystemD User Sessions

Prerequisite: The Lingering User Session

For a service to run continuously in the background, even after the user logs out, we must enable the lingering feature for the user account that will run the workers (e.g., a service user named appuser).

Enabling lingering:

sudo loginctl enable-linger appuser

This ensures the appuser's systemd user session remains active indefinitely, allowing the worker processes to survive server reboots and user logouts.

Creating the Systemd User Unit File

We define the worker service using a unit file, placed in the user's systemd configuration directory (~/.config/systemd/user/).
  • File Location: ~appuser/.config/systemd/user/cakephp-worker@.service
  • Purpose of @: The @ symbol makes this a template unit. This allows us to use a single file to create multiple, distinct worker processes, which is key to achieving concurrency.
cakephp-worker@.service content:

[Unit]
Description=CakePHP Queue Worker #%i
After=network.target

[Service]
# We use the full path to the PHP executable
ExecStart=/usr/bin/php /path/to/your/app/bin/cake queue worker
# Set the current working directory to the application root
WorkingDirectory=/path/to/your/app
# Restart the worker if it fails (crashes, memory limit exceeded, etc.)
Restart=always
# Wait a few seconds before attempting a restart
RestartSec=5
# Output logs to the systemd journal
StandardOutput=journal
StandardError=journal
# Ensure permissions are correct and process runs as the user
User=appuser

[Install]
WantedBy=default.target

Achieving Concurrency (Scaling the Workers)

Concurrency is achieved by enabling multiple instances of this service template, distinguished by the suffix provided in the instance name (e.g., @1, @2, @3).

Reload and start instances: after creating the file, the user session must be reloaded, and the worker instances must be started and enabled.

Reload the daemon (as appuser):

systemctl --user daemon-reload

Start and enable concurrent workers (as appuser). To run three workers concurrently:

# Start Worker Instance 1
systemctl --user enable --now cakephp-worker@1.service

# Start Worker Instance 2
systemctl --user enable --now cakephp-worker@2.service

# Start Worker Instance 3
systemctl --user enable --now cakephp-worker@3.service

Result: The system now has three independent and managed processes running the bin/cake queue worker command, achieving a concurrent processing pool of three jobs.

Monitoring and Management

systemd provides powerful tools for managing and debugging the worker pool.

Check concurrency status:

systemctl --user status 'cakephp-worker@*'

This command displays the status of all concurrent worker instances, showing which are running or if any have failed and been automatically restarted.

Viewing worker logs: all output is directed to the systemd journal:

journalctl --user -u 'cakephp-worker@*' -f

This allows developers to inspect errors and task completion messages across all concurrent workers from a single, centralized log. Using systemd and lingering is highly advantageous as it eliminates the need for a third-party tool, integrates naturally with system logging, and provides reliable process management for a robust, concurrent task environment.

Summary

Shifting from a single worker to multiple concurrent workers is essential to prevent bottlenecks and system freezes caused by slow jobs, ensuring high reliability and low latency for asynchronous tasks. One robust way to achieve this concurrency in CakePHP applications is by using systemd user sessions and template unit files (e.g., cakephp-worker@.service) to easily manage and horizontally scale the worker processes.

This article is part of the CakeDC Advent Calendar 2025 (December 9th 2025)

Notifications That Actually Work

This article is part of the CakeDC Advent Calendar 2025 (December 8th 2025)

Building a modern application without notifications is like running a restaurant without telling customers their food is ready. Users need to know what's happening. An order shipped. A payment went through. Someone mentioned them in a comment. These moments matter, and how you communicate them matters even more.

I've built notification systems before. They always started simple. Send an email when something happens. Easy enough. Then someone wants in-app notifications. Then someone needs Slack alerts. Then the mobile team wants push notifications. Before you know it, you're maintaining five different notification implementations, each with its own bugs and quirks.

That's exactly why the CakePHP Notification plugin exists. It brings order to the chaos by giving you one consistent way to send notifications, regardless of where they're going or how they're being delivered. The core notification system (crustum/notification) provides the foundation with database and email support built in.

Two Worlds of Notifications

Notifications naturally fall into two categories, and understanding this split helps you architect your system correctly.

The first category is what I call presence notifications. These are for users actively using your application. They're sitting there, browser open, working away. You want to tell them something right now. A new message arrived. Someone approved their request. The background job finished. These notifications need to appear instantly in the UI, update the notification bell, and maybe play a sound. They live in your database and get pushed to the browser through WebSockets.

The second category is reach-out notifications. These go find users wherever they are. Email reaches them in their inbox. SMS hits their phone. Slack pings them in their workspace. Telegram messages appear on every device they own. These notifications cross boundaries, reaching into other platforms and services to deliver your message.

Understanding this distinction is crucial because these two types of notifications serve different purposes and require different technical approaches. Presence notifications need a database to store history and WebSocket connections for real-time delivery. Reach-out notifications need API integrations and reliable delivery mechanisms.

The Beautiful Part: One Interface

Here's where it gets good. Despite these two worlds being completely different, you write the same code to send both types. Your application doesn't care whether a notification goes to the database, WebSocket, email, or Slack. You just say "notify this user" and the system handles the rest.

$user = $this->Users->get($userId);
$user->notify(new OrderShipped($order));

That's it. The OrderShipped notification might go to the database for the in-app notification bell, get broadcast via WebSocket for instant delivery, and send an email with tracking information. All from that one line of code.

Web interface for notifications

Let's talk about the in-app notification experience first. This is what most users interact with daily. That little bell icon in the corner of your application. Click it, see your notifications. It's so common now that users expect it.

The NotificationUI plugin (crustum/notification-ui) provides a complete notification interface out of the box. There's a bell widget that you drop into your layout, and it just works. It shows the unread count, displays notifications in a clean interface, marks them as read when clicked, and supports actions like buttons in the notification.

You have two display modes to choose from. Dropdown mode gives you the traditional experience where clicking the bell opens a menu below it. Panel mode creates a sticky side panel that slides in from the edge of your screen, similar to what you see in modern admin panels.

Setting it up takes just a few lines in your layout template.

<?= $this->element('Crustum/NotificationUI.notifications/bell_icon', [
    'mode' => 'panel',
    'pollInterval' => 30000,
]) ?>

The widget automatically polls the server for new notifications every 30 seconds by default. This works perfectly fine for most applications. Users see new notifications within a reasonable time, and your server isn't overwhelmed with requests. But sometimes 30 seconds feels like forever. When someone sends you a direct message, you want to see it immediately. That's where real-time broadcasting comes in.

Real-Time Broadcasting for Instant Delivery

Adding real-time broadcasting transforms the notification experience. Instead of polling every 30 seconds, new notifications appear instantly through WebSocket connections. The moment someone triggers a notification for you, it pops up in your interface.

The beautiful thing is you can combine both approaches. Keep database polling as a fallback, add real-time broadcasting for instant delivery. If the WebSocket connection drops, polling keeps working. When the connection comes back, broadcasting takes over again. Users get reliability and instant feedback.

<?php $authUser = $this->request->getAttribute('identity'); ?>
<?= $this->element('Crustum/NotificationUI.notifications/bell_icon', [
    'mode' => 'panel',
    'enablePolling' => true,
    'broadcasting' => [
        'userId' => $authUser->getIdentifier(),
        'userName' => $authUser->username,
        'pusherKey' => 'app-key',
        'pusherHost' => '127.0.0.1',
        'pusherPort' => 8080,
    ],
]) ?>

This hybrid approach gives you the best of both worlds. Real-time when possible, reliable fallback always available. Behind the scenes, this uses the Broadcasting (crustum/broadcasting) and BroadcastingNotification (crustum/notification-broadcasting) plugins working together. When you broadcast a notification, it goes through the same WebSocket infrastructure. The NotificationUI plugin handles subscribing to the right channels and updating the interface when broadcasts arrive.

Creating Your Notification Classes

Notifications in CakePHP are just classes. Each notification type gets its own class that defines where it goes and what it contains. This keeps everything organized and makes notifications easy to test.

namespace App\Notification;

use Crustum\Notification\Notification;
use Crustum\Notification\Message\DatabaseMessage;
use Crustum\Notification\Message\MailMessage;
use Crustum\BroadcastingNotification\Message\BroadcastMessage;
use Crustum\BroadcastingNotification\Trait\BroadcastableNotificationTrait;

class OrderShipped extends Notification
{
    use BroadcastableNotificationTrait;

    public function __construct(
        private $order
    ) {
    }

    public function via($notifiable): array
    {
        return ['database', 'broadcast', 'mail'];
    }

    public function toDatabase($notifiable): DatabaseMessage
    {
        return DatabaseMessage::new()
            ->title('Order Shipped')
            ->message("Your order #{$this->order->id} has shipped!")
            ->actionUrl(Router::url(['controller' => 'Orders', 'action' => 'view', $this->order->id], true))
            ->icon('check');
    }

    public function toMail($notifiable): MailMessage
    {
        return MailMessage::create()
            ->subject('Your Order Has Shipped')
            ->greeting("Hello {$notifiable->name}!")
            ->line("Great news! Your order #{$this->order->id} has shipped.")
            ->line("Tracking: {$this->order->tracking_number}")
            ->action('Track Your Order', ['controller' => 'Orders', 'action' => 'track', $this->order->id]);
    }

    public function toBroadcast(EntityInterface|AnonymousNotifiable $notifiable): BroadcastMessage|array
    {
        return new BroadcastMessage([
            'title' => 'Order Shipped',
            'message' => "Your order #{$this->order->id} has shipped!",
            'order_id' => $this->order->id,
            'order_title' => $this->order->title,
            'tracking_number' => $this->order->tracking_number,
            'action_url' => Router::url(['controller' => 'Orders', 'action' => 'view', $this->order->id], true),
        ]);
    }

    public function broadcastOn(): array
    {
        return [new PrivateChannel('users.' . $notifiable->id)];
    }
}

The via method tells the system which channels to use. The toDatabase method formats the notification for display in your app. The toMail method creates an email. The toBroadcast method formats the notification for broadcast. The broadcastOn method specifies which WebSocket channels to broadcast to. One notification class, three different formats, all sent automatically when you call notify. That's the power of this approach.

Reach-Out Notifications

Now let's talk about reaching users outside your application. This is where the plugin really shines because there are so many channels available.

Email is the classic. Everyone has email. The base notification plugin gives you a fluent API for building beautiful transactional emails. You describe what you want to say using simple methods, and it generates a responsive HTML email with a plain text version automatically.

Slack integration (crustum/notification-slack) lets you send notifications to team channels. Perfect for internal alerts, deployment notifications, or monitoring events. You get full support for Slack's Block Kit, so you can create rich, interactive messages with buttons, images, and formatted sections.

Telegram (crustum/notification-telegram) reaches users on their phones. Since Telegram has a bot API, you can send notifications directly to users who've connected their Telegram account. The messages support formatting, buttons, and even images.

SMS through Seven.io (crustum/notification-seven) gets messages to phones as text messages. This is great for critical alerts, verification codes, or appointment reminders. Things that need immediate attention and work even without internet access.

RocketChat (crustum/notification-rocketchat) is perfect if you're using RocketChat for team communication. Send notifications to channels or direct messages, complete with attachments and formatting.

The plugin system allows you to add new notification channels easily. You can create a new plugin for a new channel and install it like any other plugin.

The brilliant part is that adding any of these channels to a notification is just adding a string to the via array and implementing one method. Want to add Slack to that OrderShipped notification? Add 'slack' to the array and implement toSlack. Done.

public function via($notifiable): array
{
    return ['database', 'broadcast', 'mail', 'slack'];
}

public function toSlack($notifiable): BlockKitMessage
{
    return (new BlockKitMessage())
        ->text('Order Shipped')
        ->headerBlock('Order Shipped')
        ->sectionBlock(function ($block) {
            $block->text("Order #{$this->order->id} has shipped!");
            $block->field("*Customer:*\n{$notifiable->name}");
            $block->field("*Tracking:*\n{$this->order->tracking_number}");
        });
}

Now when someone's order ships, they get an in-app notification with real-time delivery, an email with full details, and your team gets a Slack message in the orders channel. All automatic.

The Database as Your Notification Store

Every notification sent through the database channel gets stored in a notifications table. This gives you a complete history of what users were notified about and when. The NotifiableBehavior adds methods to your tables for working with notifications.

$user = $usersTable->get($userId);

$unreadNotifications = $usersTable->unreadNotifications($user)->all();
$readNotifications = $usersTable->readNotifications($user)->all();

$usersTable->markNotificationAsRead($user, $notificationId);
$usersTable->markAllNotificationsAsRead($user);

The UI widget uses these methods to display notifications and mark them as read. But you can use them anywhere in your application. Maybe you want to show recent notifications on a user's dashboard. Maybe you want to delete old notifications. The methods are there.

Queuing for Performance

Sending notifications, especially external ones, takes time. Making API calls to Slack, Seven.io, or Pusher adds latency to your request. If you're sending to multiple channels, that latency multiplies. The solution is queuing. Implement the ShouldQueueInterface on your notification class, and the system automatically queues notification sending as background jobs.

use Crustum\Notification\ShouldQueueInterface;

class OrderShipped extends Notification implements ShouldQueueInterface
{
    protected ?string $queue = 'notifications';
}

Now when you call notify, it returns immediately. The actual notification sending happens in a background worker. Your application stays fast, users don't wait, and notifications still get delivered reliably.

Testing Your Notifications

Testing notification systems used to be painful. You'd either send test notifications to real services (annoying) or mock everything (fragile). The NotificationTrait makes testing clean and simple.

use Crustum\Notification\TestSuite\NotificationTrait;

class OrderTest extends TestCase
{
    use NotificationTrait;

    public function testOrderShippedNotification()
    {
        $user = $this->Users->get(1);
        $order = $this->Orders->get(1);

        $user->notify(new OrderShipped($order));

        $this->assertNotificationSentTo($user, OrderShipped::class);
        $this->assertNotificationSentToChannel('mail', OrderShipped::class);
        $this->assertNotificationSentToChannel('database', OrderShipped::class);
    }
}

The trait captures all notifications instead of sending them. There are many different assertions you can use to test your notifications. You can assert that the right notifications were sent to the right users through the right channels. You can even inspect the notification data to verify it contains the correct information.

Localization

Applications serve users in different languages, and your notifications should respect that. The notification system integrates with CakePHP's localization system.

$user->notify((new OrderShipped($order))->locale('es'));

Even better, users can have a preferred locale stored on their entity. Implement a preferredLocale method or property, and notifications automatically use it.

class User extends Entity
{
    public function getPreferredLocale(): string
    {
        return $this->locale;
    }
}

Now you don't even need to specify the locale. The system figures it out automatically and sends notifications in each user's preferred language.

Bringing It Together

What I like about this notification system is how it scales with your needs. Start simple. Just database notifications. Add real-time broadcasting when you want instant delivery. Add email when you need to reach users outside your app. Add Slack when your team wants internal alerts. Add SMS for critical notifications.

Each addition is incremental. You're not rewriting your notification system each time. You're adding channels to the via array and implementing format methods. The core logic stays the same.

The separation between presence notifications and reach-out notifications makes architectural sense. They serve different purposes, use different infrastructure, but share the same interface. This makes your code clean, your system maintainable, and your notifications reliable.

Whether you're building a small application with basic email notifications or a complex system with real-time updates, database history, email, SMS, and team chat integration, you're using the same patterns. The same notification classes. The same notify method.

That consistency is what makes the system powerful. You're not context switching between different notification implementations. You're just describing what should be notified, who should receive it, and how it should be formatted. The system handles the rest.

This article is part of the CakeDC Advent Calendar 2025 (December 8th 2025)

Scaling Your CakePHP App: From Monolith to Distributed Powerhouse

This article is part of the CakeDC Advent Calendar 2025 (December 7th 2025)

Your CakePHP application is a success story – users love it, and traffic is booming! But what happens when that single, mighty server starts to groan under the load? That's when you need to think about scaling. In this article, we'll dive into the world of application scaling, focusing on how to transform your regular CakePHP project into a horizontally scalable powerhouse. We'll cover why, when, and how to make the necessary changes to your application and infrastructure.

Vertical vs. Horizontal Scaling: What's the Difference?

Before we jump into the "how," let's clarify the two fundamental ways to scale any application:
  1. Vertical Scaling (Scaling Up):
    • Concept: Adding more resources (CPU, RAM, faster storage) to your existing server. Think of it as upgrading your car's engine.
    • Pros: Simpler to implement initially, no major architectural changes needed.
    • Cons: Hits a hard limit (you can only get so much RAM or CPU on a single machine), higher cost for diminishing returns, and still a single point of failure.
  2. Horizontal Scaling (Scaling Out):
    • Concept: Adding more servers to distribute the load. This is like adding more cars to your fleet.
    • Pros: Virtually limitless scalability (add as many servers as needed), high availability (if one server fails, others take over), better cost-efficiency at large scales.
    • Cons: Requires significant architectural changes, more complex to set up and manage.

When Do You Need to Scale Horizontally?

While vertical scaling can buy you time, here are the key indicators that it's time to invest in horizontal scaling for your CakePHP application:
  • Hitting Performance Ceilings: Your server's CPU or RAM regularly maxes out, even after vertical upgrades.
  • Single Point of Failure Anxiety: You dread a server crash because it means your entire application goes down.
  • Inconsistent Performance: Your application's response times are erratic during peak hours.
  • Anticipated Growth: You're expecting a marketing campaign or feature launch that will significantly increase traffic.
  • High Availability Requirements: Your business demands minimal downtime, making a single server unacceptable.

From Regular to Resilient: Necessary Changes for CakePHP

The core principle for horizontal scaling is that your application servers must become "stateless." This means any server should be able to handle any user's request at any time, without relying on local data. If a user lands on App Server A for one request and App Server B for the next, both servers must act identically. Here's what needs to change in a typical CakePHP, MySQL, cache, and logs setup:

1. Sessions: The Single Most Critical Change

  • Problem: By default, CakePHP stores session files locally (tmp/sessions). If a user's request is handled by a different server, their session is lost.
  • Solution: Centralize session storage using a distributed cache system like Redis or Memcached.
  • CakePHP Action: Modify config/app.php to tell CakePHP to use a cache handler for sessions, pointing to your centralized Redis instance. Consult the official RedisEngine options documentation.
// config/app.php
'Session' => [
    'defaults' => 'cache', // Use 'cache' instead of 'php' (file-based)
    'handler' => [
        'config' => 'session_cache' // Name of the cache config to use
    ],
],
// ...
'Cache' => [
    'session_cache' => [
        'className' => 'Redis',
        'host' => 'your_redis_server_ip_or_hostname',
        'port' => 6379,
        'duration' => '+1 days',
        'prefix' => 'cake_session_',
    ],
    // ... (ensure 'default' and '_cake_core_' also use Redis)
]

2. Application Cache

  • Problem: Local cache (tmp/cache) means each server builds its own cache, leading to inefficiency and potential inconsistencies.
  • Solution: Just like sessions, point all your CakePHP cache configurations (default, _cake_core_, etc.) to your centralized Redis or Memcached server.
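For example (a minimal sketch; the host value is a placeholder), the 'default' and '_cake_core_' configs in config/app.php simply switch their engine and point at the shared Redis server:

'Cache' => [
    'default' => [
        'className' => 'Redis',
        'host' => 'your_redis_server_ip_or_hostname',
        'port' => 6379,
        'prefix' => 'cake_default_',
    ],
    '_cake_core_' => [
        'className' => 'Redis',
        'host' => 'your_redis_server_ip_or_hostname',
        'port' => 6379,
        'prefix' => 'cake_core_',
        'duration' => '+1 years',
    ],
],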

3. User Uploaded Files

  • Problem: If a user uploads a profile picture to App Server A's local storage (webroot/img/uploads/), App Server B won't find it.
  • Solution: Use a shared, centralized file storage system.
  • CakePHP Action:
    • Recommended: Implement Object Storage (e.g., AWS S3, DigitalOcean Spaces). This involves changing your file upload logic to send files directly to S3 via an SDK or plugin, and serving them from there.
    • Alternative: Mount a Network File System (NFS) share (e.g., AWS EFS) at your upload directory (webroot/img/uploads) across all app servers. This requires no code changes but can introduce performance bottlenecks and complexity at scale.
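As a rough illustration of the object-storage route (a sketch using the official AWS SDK directly; the bucket name, region, and request-handling context are assumptions, and a Flysystem-based plugin works just as well):

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region' => 'us-east-1', // assumed region
]);

// $file is the PSR-7 UploadedFileInterface, e.g. $this->request->getUploadedFile('avatar')
$s3->putObject([
    'Bucket' => 'my-app-uploads', // assumed bucket name
    'Key' => 'avatars/' . $userId . '.jpg',
    'Body' => $file->getStream(),
    'ContentType' => $file->getClientMediaType(),
]);

// Store the S3 key (or full URL) on the entity instead of a local file path.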

4. Application Logs

  • Problem: Log files (logs/error.log) are scattered across multiple servers, making debugging a nightmare.
  • Solution: Centralize your logging.
  • CakePHP Action: Configure CakePHP's Log engine to use syslog (a standard logging protocol). To configure this, see the Logging to Syslog section in the documentation. Then, deploy a log collector (like Fluentd or Logstash) on each app server to forward these logs to a centralized logging system (e.g., Elasticsearch/Kibana, Papertrail, DataDog).
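A minimal sketch of that Log engine change in config/app.php (the level split shown is an assumption):

'Log' => [
    'debug' => [
        'className' => 'Cake\Log\Engine\SyslogLog',
        'levels' => ['notice', 'info', 'debug'],
    ],
    'error' => [
        'className' => 'Cake\Log\Engine\SyslogLog',
        'levels' => ['warning', 'error', 'critical', 'alert', 'emergency'],
    ],
],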

The Database Bottleneck: Database Replication (MySQL & PostgreSQL)

At this stage, your CakePHP application is fully stateless. However, your single database server now becomes the bottleneck. Whether you are using MySQL or PostgreSQL, the solution is Replication.

Understanding Replication

  • Primary (Writer): Handles all write operations (INSERT, UPDATE, DELETE).
  • Replica (Reader): Handles read operations (SELECT).
  • For MySQL: The Primary copies data changes to Replicas using the Binary Log (Binlog).
  • For PostgreSQL: It uses Streaming Replication via WAL (Write-Ahead Logging) files to keep replicas in sync.
CakePHP Configuration Note: CakePHP makes switching easy. In your config/app.php, you simply define your roles. The driver (Cake\Database\Driver\Mysql or Cake\Database\Driver\Postgres) handles the specific connection protocol underneath. You don't need to change your query logic.
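As a sketch, assuming CakePHP 4.5+ where a connection config accepts read and write role overrides (check the Database documentation for your exact version; hostnames and credentials are placeholders):

// config/app.php (Connection and Mysql are imported at the top of the skeleton config)
'Datasources' => [
    'default' => [
        'className' => Connection::class,
        'driver' => Mysql::class, // or Postgres::class
        'username' => 'my_user',
        'password' => 'my_password',
        'database' => 'my_cake_app',
        // Writes go to the primary
        'write' => [
            'host' => 'db-primary.internal',
        ],
        // Reads are served by the replica
        'read' => [
            'host' => 'db-replica.internal',
        ],
    ],
],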

The Challenge: "Replica Lag"

Because replication is typically asynchronous, there's always a delay (lag) between a write on the Primary and when it becomes available on the Replicas. The Immediate Consistency Problem:
  1. User updates their profile (write to Primary).
  2. App immediately redirects to the profile page (read from Replica).
  3. Due to lag, the Replica might not yet have the updated data. The user sees old information or a "not found" error.
Mitigating this lag to guarantee a user sees their changes immediately often requires the application to intelligently direct reads to the Primary right after a write, before reverting to the Replicas.

Solutions for the Database Bottleneck

While your initial focus should be separating reads and writes in CakePHP, the Primary server will eventually hit its limits for write volume. Future solutions for database scaling depend heavily on the type of database server you use (Standard MySQL, Managed Cloud DB, MySQL Cluster, etc.). Here are common advanced solutions for when the Primary MySQL server becomes the final performance constraint:
  • Database Proxies (Connection Pooling):
    • For MySQL: Tools like ProxySQL route queries automatically and split reads/writes.
    • For PostgreSQL: PgBouncer is the industry standard for connection pooling to prevent overhead, often paired with Pgpool-II for load balancing and read/write splitting.
  • High Availability Clusters:
    • MySQL: Uses Group Replication or Galera Cluster.
    • PostgreSQL: Tools like Patroni are widely used to manage high availability and automatic failover.

Local Testing: Scaling Your CakePHP App with Docker

Now that we understand the theory, let's see it in action with your actual CakePHP application. We will use Docker Compose to spin up a cluster of 3 application nodes, a Load Balancer, Redis, and MySQL. To make this easy, we won't even build a custom Docker image. We will use the popular webdevops/php-nginx image, which comes pre-configured for PHP applications; if you already have a Docker image in your project, you can use that instead. You only need to add two files to the root of your CakePHP project.
  1. nginx.conf (The Load Balancer Config) This file configures an external Nginx container to distribute traffic among your 3 CakePHP application nodes.
upstream backend_hosts {
    # 'app' matches the service name in docker-compose
    # Docker resolves this to the IPs of all 3 replicas
    server app:80;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend_hosts;

        # Pass necessary headers so CakePHP knows it's behind a proxy
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
  2. docker-compose.yml (The Cluster Infrastructure) Here we define the architecture. We mount your current local code into the containers so you don't need to rebuild anything.
version: '3.8'

services:
  # Your CakePHP Application Nodes
  app:
    image: webdevops/php-nginx:8.2 # Pre-built image with PHP 8.2 & Nginx
    # We do NOT map ports here (e.g., "80:80") to avoid conflicts between replicas
    deploy:
      replicas: 3 # <--- Runs 3 instances of your CakePHP app
    volumes:
      - ./:/app # Mount your current project code into the container
    environment:
      # 1. Tell the image where CakePHP's webroot is
      WEB_DOCUMENT_ROOT: /app/webroot
      # 2. Inject configuration for app.php
      DEBUG: "true"
      SECURITY_SALT: "ensure-this-is-long-and-identical-across-nodes"
      # 3. Database Config (Connecting to the 'db' service)
      MYSQL_HOST: db
      MYSQL_USERNAME: my_user
      MYSQL_PASSWORD: my_password
      MYSQL_DATABASE: my_cake_app
      # 4. Redis Config (Session & Cache)
      REDIS_HOST: redis
    depends_on:
      - db
      - redis
    networks:
      - cake_cluster

  # The Main Load Balancer (Nginx)
  lb:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "8080:80" # Access your app at localhost:8080
    depends_on:
      - app
    networks:
      - cake_cluster

  # Shared Services
  redis:
    image: redis:alpine
    networks:
      - cake_cluster

  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: my_cake_app
      MYSQL_USER: my_user
      MYSQL_PASSWORD: my_password
    networks:
      - cake_cluster

networks:
  cake_cluster:

How to Run the Test

  1. Configure app.php: Ensure your config/app.php is reading the environment variables (e.g., getenv('MYSQL_HOST') and getenv('REDIS_HOST')) as discussed earlier; a minimal sketch follows after these steps.
  2. Launch: Run the cluster:
docker compose up -d
  3. Migrate: Run your database migrations on one of the containers (since they all share the same DB, you only need to do this once):

docker compose exec app-1 bin/cake migrations migrate

(Note: Docker might name the container slightly differently, e.g., project_app_1. Use docker ps to check the name.)
  4. Test: Open http://localhost:8080.
You are now interacting with a load-balanced CakePHP cluster. Nginx (the Load Balancer) is receiving your requests on port 8080 and distributing them to one of the 3 app containers. Because you are using Redis for sessions, you can browse seamlessly, even though different servers are handling your requests!
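Going back to step 1, a minimal way for config/app.php to read those injected variables (the fallback values are assumptions for local use):

// config/app.php
'Datasources' => [
    'default' => [
        'host' => getenv('MYSQL_HOST') ?: 'localhost',
        'username' => getenv('MYSQL_USERNAME') ?: 'my_user',
        'password' => getenv('MYSQL_PASSWORD') ?: 'my_password',
        'database' => getenv('MYSQL_DATABASE') ?: 'my_cake_app',
        // ...
    ],
],
'Cache' => [
    'session_cache' => [
        'className' => 'Redis',
        'host' => getenv('REDIS_HOST') ?: '127.0.0.1',
        // ...
    ],
],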

Moving to Production

Simulating this locally with Docker Compose is great for understanding the concepts, but in the real world, we rarely manage scaling by manually editing a YAML file and restarting containers. In a professional environment, more advanced tools take over to manage what we just simulated:
  1. Container Orchestrators (Kubernetes / K8s): The industry standard. Instead of docker-compose, you use Kubernetes. It monitors the health of your containers (Pods). If a CakePHP node stops responding due to memory leaks, Kubernetes kills it and creates a fresh one automatically to ensure you always have your desired number of replicas.
  2. Cloud Load Balancers (AWS ALB / Google Cloud Load Balancing): Instead of configuring your own Nginx container as we did above, you use managed services from your cloud provider (like AWS Application Load Balancer). These are powerful hardware/software solutions that handle traffic distribution, SSL termination, and security before the request even hits your servers.
  3. Auto-Scaling Groups: This is the ultimate goal. You configure rules like: "If average CPU usage exceeds 70%, launch 2 new CakePHP servers. If it drops below 30%, destroy them." This allows your infrastructure to "breathe"—expanding during Black Friday traffic and shrinking (saving money) at night.

Conclusion

Scaling a CakePHP application horizontally is a journey, not a destination. It means shifting from managing a single server to orchestrating a distributed system. By making your application stateless with Redis and leveraging database replication (for either MySQL or PostgreSQL), you empower your CakePHP app to handle massive traffic, offer high availability, and grow far beyond the limits of a single machine. Are you ready to build a truly robust and scalable CakePHP powerhouse?

This article is part of the CakeDC Advent Calendar 2025 (December 7th 2025)

We Bake with CakePHP