Some of the links (*) in this blog post are affiliate links. This means I may earn a commission, at no extra cost to you, if you make a purchase through these links. I only recommend products and services that I personally use and enjoy.
Hetzner Cloud
Create a new instance and choose a basic Ubuntu server.
With Hetzner*, you can always upgrade or downgrade vCPU and RAM, as well as increase disk size. Just keep in mind that you can’t switch to a plan with a smaller disk.
I chose the entry-level server with 2 vCPU, 4 GB RAM, 40 GB SSD, and 20 TB of traffic for under $4 USD per month.
DigitalOcean Cloud
Create a project with DigitalOcean*, and within it, set up a new Droplet with a basic Ubuntu server.
I chose the smallest shared CPU Droplet that meets LibreChat’s minimum requirements: 1 vCPU, 1 GB RAM, 10 GB SSD, and 1 TB transfer for $6 USD per month.
If you’re just experimenting, "password authentication" should be fine; otherwise, always opt for SSH key authentication.
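If you go the SSH-key route and don't have a key pair yet, you can generate one locally first. A sketch (the file name and comment are arbitrary placeholders; `-N ""` means no passphrase, which you'd normally set for a real key):

```shell
# Remove any previous demo key so ssh-keygen does not prompt to overwrite
rm -f ./librechat_key ./librechat_key.pub

# Generate an Ed25519 key pair without a passphrase (use one in production)
ssh-keygen -t ed25519 -C "librechat" -N "" -f ./librechat_key -q

# Then add the public key during instance creation, or copy it afterwards:
# ssh-copy-id -i ./librechat_key.pub root@your-server-ip
```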
SSH into your server
Once the server is created, SSH into it using:
ssh root@your-server-ip
Update your package manager: Before installing anything, make sure your system is up to date.
apt update && apt upgrade -y
Install Docker: Docker is crucial for containerizing LibreChat.
apt install docker.io -y
Install Docker Compose: Docker Compose simplifies running multi-container Docker applications.
apt install docker-compose -y
Install Git: This might already be installed, but it's essential for cloning the LibreChat repository.
apt install git -y
Clone the LibreChat repository from GitHub:
git clone https://github.com/danny-avila/LibreChat.git
cd LibreChat
Create the .env file: The .env file contains environment variables required for LibreChat to function.
cp .env.example .env
nano .env
Add your OpenAI API key: In the .env file, set your OpenAI API key.
OPENAI_API_KEY=your_openai_api_key
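If you'd rather script the change than edit the file with nano, a sed one-liner works too. The sketch below demonstrates it on a throwaway copy (`.env.demo` and the key value are placeholders; in the real setup, run the sed line against LibreChat's .env):

```shell
# Demo on a sample file; in the real setup, target LibreChat's .env instead
printf 'OPENAI_API_KEY=user_provided\n' > .env.demo

# Replace the whole OPENAI_API_KEY line (the key shown is a placeholder)
sed -i 's|^OPENAI_API_KEY=.*|OPENAI_API_KEY=sk-your-key-here|' .env.demo

# Verify the change
grep '^OPENAI_API_KEY=' .env.demo
```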
Once the environment is configured, you can use Docker Compose to run LibreChat.
sudo docker-compose up -d
First, ensure your package manager is up to date:
sudo apt update && sudo apt upgrade -y
Then install Nginx and Certbot:
sudo apt install certbot python3-certbot-nginx -y
sudo apt install nginx -y
You'll need to create an Nginx configuration file for LibreChat. Create a new file in /etc/nginx/sites-available/:
sudo nano /etc/nginx/sites-available/librechat
Paste the following configuration into the file, replacing your_domain.com with your actual domain:
server {
listen 80;
server_name your_domain.com www.your_domain.com;
location / {
proxy_pass http://localhost:3080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
For a subdomain, only include the subdomain:
server_name subdomain.your_domain.com;
Link the configuration to the sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/librechat /etc/nginx/sites-enabled/
Before proceeding, ensure your Nginx configuration is valid:
sudo nginx -t
If everything is okay, reload Nginx:
sudo systemctl reload nginx
Set up an A-record for your domain or subdomain and enter your Hetzner* / DigitalOcean* server's IPv4 address. After some propagation time, your domain should point to the LibreChat installation.
Run Certbot to generate SSL certificates:
sudo certbot --nginx -d your_domain.com -d www.your_domain.com
For a subdomain, use:
sudo certbot --nginx -d subdomain.your_domain.com
Enter your email address and accept the Let's Encrypt Terms of Service.
If you're having difficulties with IPv6, you might need to add an AAAA record in your DNS settings.
Certbot should automatically handle renewal, but it's a good idea to test it:
sudo certbot renew --dry-run
In a company setup, there's a good chance you'll want to disable user registration in LibreChat. You can do so by updating the .env file:
nano .env
ALLOW_REGISTRATION=false
While you're at it, make a few more adjustments, such as setting new JWT secrets.
JWT_SECRET=[NEW SECRET]
JWT_REFRESH_SECRET=[ANOTHER NEW SECRET]
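Strong random secrets can be generated with openssl, for example (each invocation yields a fresh 64-character hex string; any sufficiently long random value works):

```shell
# 32 random bytes, hex-encoded: one 64-character secret per run
openssl rand -hex 32
openssl rand -hex 32
```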
Then restart the Docker containers:
sudo docker-compose down
sudo docker-compose up -d
Use the command line to create new users:
sudo docker-compose exec api npm run create-user
Just enter some details about the new user and you're done.
The final optional step is to set up Azure OpenAI. By configuring Azure OpenAI, you can leverage your own custom deployments that offer flexibility to meet specific needs. Azure allows you to create regional deployments, which can provide better response times and data residency options compared to the global standard model.
In the .env file, you'll find a section for Azure OpenAI settings:
AZURE_OPENAI_DEFAULT_MODEL=gpt-3.5-turbo
AZURE_OPENAI_MODELS=gpt-3.5-turbo,gpt-4
AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE
AZURE_API_KEY=
AZURE_OPENAI_API_INSTANCE_NAME=
AZURE_OPENAI_API_DEPLOYMENT_NAME=
AZURE_OPENAI_API_VERSION=
AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=
These settings are marked as deprecated but will still work for now. However, the recommended way is to configure azureOpenAI via the librechat.yaml configuration file instead. A guide on using the librechat.yaml file might be worth another blog post.
Below is an example configuration for an Azure endpoint such as:
https://my-openai-resource.openai.azure.com/openai/deployments/my-gpt-deployment-4o-2024-08-06-sweden/chat/completions?api-version=2023-03-15-preview
AZURE_OPENAI_DEFAULT_MODEL=gpt-4o-2024-08-06-sweden
AZURE_OPENAI_MODELS=gpt-4o-2024-08-06-sweden
AZURE_USE_MODEL_AS_DEPLOYMENT_NAME=TRUE
AZURE_API_KEY=MY_API_KEY
AZURE_OPENAI_API_INSTANCE_NAME=my-openai-resource
AZURE_OPENAI_API_DEPLOYMENT_NAME=
AZURE_OPENAI_API_VERSION=2023-03-15-preview
AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=my-gpt-deployment-4o-2024-08-06-sweden
Setting up a server like this should also involve basic security measures, so don’t forget to secure your setup accordingly.
That’s it! Enjoy exploring your new LibreChat setup.
Laravel provides developers with several multitenancy packages that make it easier to comply with these regulations.
In a multi-DB setup an application usually has a "landlord" database that contains, at a bare minimum, an "accounts" or "tenants" table.
Then for each customer there is a separate database holding all customer specific data tables.
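With Laravel this typically translates into two database connections in config/database.php. A minimal sketch (connection and database names are illustrative; Spatie's package expects a landlord connection and a tenant connection whose database is switched at runtime):

```php
// config/database.php (excerpt)
'connections' => [
    'landlord' => [
        'driver' => 'mysql',
        'database' => 'landlord',
        // ... host, username, password, etc.
    ],
    'tenant' => [
        'driver' => 'mysql',
        // no fixed 'database' here: it is set at runtime
        // whenever a tenant is made current
        // ... host, username, password, etc.
    ],
],
```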
When using Spatie's Laravel-multitenancy package with a multi-DB setup you might encounter the following error:
Base table or view not found: 1146 Table [landlord.TABLE_NAME] doesn't exist
This error usually stems from the package looking up the unique value in the "landlord" database instead of the "tenant" database, despite the model using the "UsesTenantConnection" concern.
If you encounter this error, check whether a validation rule using "unique" is involved.
In Laravel there are a couple of ways to write validation rules:
'email' => ['email', Rule::unique('users')],
'email' => 'email|unique:users',
If you do get this error, try using the model class name instead of the table name.
'email' => ['email', Rule::unique(User::class)],
In the background Laravel takes care of resolving the class name to a table string.
/**
* Resolves the name of the table from the given string.
*
* @param string $table
* @return string
*/
public function resolveTableName($table)
{
if (! str_contains($table, '\\') || ! class_exists($table)) {
return $table;
}
if (is_subclass_of($table, Model::class)) {
$model = new $table;
if (str_contains($model->getTable(), '.')) {
return $table;
}
return implode('.', array_map(function (string $part) {
return trim($part, '.');
}, array_filter([$model->getConnectionName(), $model->getTable()])));
}
return $table;
}
In part 2 we set up the client chat app using Laravel and Vue.js and used Laravel Echo as a wrapper for the Javascript implementation.
There was quite some trial and error involved in getting everything set up.
A lot of it revolved around the websocket server statistics not being updated in the database, and thus no statistics graph showing up in the dashboard.
Looking at the GitHub repository of Laravel WebSockets this seems to be a problem among users.
WebSocket Server (local machine)
This is what worked for me on my local machine (WAMP). The exact combination of settings might not all be necessary, but I stopped tweaking once it worked.
In config/websockets.php we obviously have to enable statistics. Statistics should be enabled by default anyway though.
'apps' => [
[
...
'enable_statistics' => true
...
]
]
perform_dns_lookup should be true.
'perform_dns_lookup' => true,
In config/broadcasting.php the host in the pusher connection is set to 127.0.0.1
'pusher' => [
...
'options' => [
...
'host' => '127.0.0.1',
...
],
],
APP_URL in .env needs to be set to the actual URL where your websocket server dashboard is:
APP_URL=http://[YOUR WEBSOCKET SERVER DOMAIN]
This environment variable seems to be used when addressing the route that stores statistics data in the database.
Also clear your config cache via artisan and restart the websocket server.
And don't forget to wait for the first graphs to appear.
Demo Chat app (local machine)
In resources/js/bootstrap.js the Echo configuration looks like this:
window.Echo = new Echo({
...
encrypted: false,
wsHost: '[YOUR WEBSOCKET SERVER DOMAIN]',
wsPort: 6001,
disableStats: true,
enabledTransports: ['ws']
...
});
The axios POST in resources/js/components/Chat.vue calls the endpoint
http://[YOUR WEBSOCKET SERVER DOMAIN]/api/message
WebSocket Server (Digital Ocean, SSL)
This is what worked for me on my DigitalOcean droplet after setting up Let's Encrypt. Again, you might not need all of these configuration settings; I haven't tried every combination but stopped tweaking once it worked.
In config/websockets.php:
'apps' => [
[
...
'enable_statistics' => true
...
]
],
'statistics' => [
...
'perform_dns_lookup' => true,
...
],
For the SSL configuration I simply use the full paths to the Let's Encrypt cert and private key.
'ssl' => [
'local_cert' => '/etc/letsencrypt/live/[YOUR WEBSOCKET SERVER DOMAIN]/cert.pem',
'local_pk' => '/etc/letsencrypt/live/[YOUR WEBSOCKET SERVER DOMAIN]/privkey.pem',
'passphrase' => null,
'verify_peer' => false,
],
config/broadcasting.php looks like this:
'pusher' => [
...
'options' => [
'host' => '[YOUR WEBSOCKET SERVER DOMAIN]',
'port' => 6001,
'scheme' => 'https',
'curl_options' => [
CURLOPT_SSL_VERIFYHOST => 0,
CURLOPT_SSL_VERIFYPEER => 0,
]
],
],
APP_URL in .env is set to the domain of the websocket server.
APP_URL=https://[YOUR WEBSOCKET SERVER DOMAIN]
Demo Chat app (Digital Ocean, SSL)
Laravel Echo is configured like this in resources/js/bootstrap.js:
window.Echo = new Echo({
...
encrypted: true,
wsHost: '[YOUR WEBSOCKET SERVER DOMAIN]',
wssHost: '[YOUR WEBSOCKET SERVER DOMAIN]',
wsPort: 6001,
wssPort: 6001,
disableStats: true,
enabledTransports: ['ws', 'wss']
...
});
The axios POST in resources/js/components/Chat.vue calls the endpoint
https://[YOUR WEBSOCKET SERVER DOMAIN]/api/message
You can find the full source code in this GitHub repo and this GitHub repo.
On to building the chat client app:
I am using Laravel again although for this demo we are not really doing much Laravel-wise. It's still nice to have for quick scaffolding.
Furthermore, in a real-world real-time application we would definitely store some data in a database or have an admin dashboard, and thus process some data server-side in our client app.
laravel new websocket-chat-demo
We will use Vue.js. The easiest way to pull that into Laravel is
composer require laravel/ui
php artisan ui vue
npm install
We also need two Javascript packages to talk to our websocket server.
npm install pusher-js laravel-echo
With Laravel Echo it's easy to deal with channel subscriptions and to listen to events.
Let's generate our public/js/app.js file that we can then reference in our app.
npm run dev
We reuse the auto-generated resources/views/welcome.blade.php and rename it to chat.blade.php. As a consequence we also have to update the view name in routes/web.php.
In chat.blade.php we add some basic Bulma-powered layout.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Websocket Chat Demo</title>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bulma@0.8.0/css/bulma.min.css">
</head>
<body>
<section class="section">
<div id="app" class="container">
<chat></chat>
</div>
</section>
<script src="{{ asset('js/app.js') }}"></script>
</body>
</html>
The chat element will later be used by our main Vue Single File Component.
In resources/js/bootstrap.js we comment out popper.js, jQuery and Bootstrap since we don't need them for this demo.
What we need though is Laravel Echo.
So we remove the comments from the Echo section and change it to
import Echo from 'laravel-echo';
window.Pusher = require('pusher-js');
window.Echo = new Echo({
broadcaster: 'pusher',
key: process.env.MIX_PUSHER_APP_KEY,
cluster: process.env.MIX_PUSHER_APP_CLUSTER,
encrypted: false,
wsHost: process.env.MIX_WEBSOCKET_HOST,
wsPort: process.env.MIX_WEBSOCKET_PORT,
disableStats: true,
enabledTransports: ['ws']
});
I have added MIX_WEBSOCKET_HOST and MIX_WEBSOCKET_PORT to .env to be able to adjust these values in one central location.
The host should be [YOUR WEBSOCKET SERVER DOMAIN].
MIX_PUSHER_APP_KEY and MIX_PUSHER_APP_CLUSTER are derived from PUSHER_APP_KEY and PUSHER_APP_CLUSTER in .env, so don't forget to use the same values here as in the websocket server app's configuration.
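For reference, the relevant .env entries might look like this (all values are placeholders; the bracketed host is your own domain):

```
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
MIX_WEBSOCKET_HOST="[YOUR WEBSOCKET SERVER DOMAIN]"
MIX_WEBSOCKET_PORT=6001
```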
Now we can start to build three Vue Single File Components.
Our own messages are usually on the right and the messages of the other users are on the left.
So let's start from the bottom up.
For our own messages we create a resources/js/components/MyMessage.vue file.
<template>
<div class="is-clearfix">
<div class="notification is-primary is-pulled-right">{{ message }}</div>
</div>
</template>
<script>
export default {
props: [
'message'
]
}
</script>
<style scoped>
.notification {
max-width: 80%;
text-align: right;
margin-bottom: 1em;
}
</style>
Nothing crazy happening here.
It only accepts one prop which is the message and displays it.
I have added a little bit of scoped CSS to align the text to the right and provide some spacing.
The message component in resources/js/components/Message.vue for the other users is almost the same:
<template>
<div class="is-clearfix">
<div class="notification is-info is-pulled-left">
<small>Sent by {{ user }}</small><br />
{{ message }}
</div>
</div>
</template>
<script>
export default {
props: [
'message',
'user'
]
}
</script>
<style scoped>
.notification {
max-width: 80%;
margin-bottom: 1em;
}
small {
color: #ccc;
font-size: 0.65em;
}
</style>
Here we are also accepting the name of the user and displaying it.
On to the Chat component that implements the main logic and makes use of our message components.
<template>
<div>
<div class="box">
<p v-if="!messages.length">Start typing the first message</p>
<div v-for="message in messages">
<my-message
v-if="message.user == userId"
:message="message.text"
></my-message>
<message
v-if="message.user != userId"
:message="message.text"
:user="message.user"
></message>
</div>
</div>
<form @submit.prevent="submit">
<div class="field has-addons has-addons-fullwidth">
<div class="control is-expanded">
<input class="input" type="text" placeholder="Type a message" v-model="newMessage">
</div>
<div class="control">
<button type="submit" class="button is-danger" :disabled="!newMessage">
Send
</button>
</div>
</div>
</form>
</div>
</template>
<script>
export default {
data () {
return {
userId: Math.random().toString(36).slice(-5),
messages: [],
newMessage: ''
}
},
mounted () {
Echo.channel('chat')
.listen('NewChatMessage', (e) => {
if(e.user != this.userId) {
this.messages.push({
text: e.message,
user: e.user
});
}
});
},
methods: {
submit() {
axios.post(`${process.env.MIX_WEBSOCKET_SERVER_BASE_URL}/api/message`, {
user: this.userId,
message: this.newMessage
}).then((response) => {
this.messages.push({
text: this.newMessage,
user: this.userId
});
this.newMessage = '';
}, (error) => {
console.log(error);
});
}
}
}
</script>
Apart from using some Bulma styling, we are doing a couple of things here:
The userId is just a random 5-character string that stays the same as long as we don't reload the browser window. For our demo that is perfectly fine since we are not storing any chat messages anyway.
The messages array will be populated with message objects from our own messages as well as with messages that we retrieve from the websocket server.
The newMessage is bound to the input field via v-model.
On submitting the form (i.e. sending a new message) we hit the API endpoint that we created in the websocket server app, which in turn fires the NewChatMessage event.
Don't forget to add
MIX_WEBSOCKET_SERVER_BASE_URL=http://[YOUR WEBSOCKET SERVER DOMAIN]
to .env.
By pushing the new message object to the messages array our own messages will be directly visible in the chat window.
As soon as the Vue instance is mounted we use Laravel Echo to subscribe to our websocket channel and listen to the NewChatMessage event.
Every new chat message will be pushed onto the messages array.
In order to not display our own messages twice we make sure that only the messages of the other users are added to the array.
Using some v-for and v-if in our template takes care of displaying all messages in the right place.
Run
npm run dev
and you should be good to go.
You can find the full source code in this GitHub repo.
Also have a look at part 3 which dives a little bit deeper into a couple of configurations that worked for me in my local development environment and on a DigitalOcean droplet.
You could do a lot of polling on the client side, but that becomes unstable pretty fast. So running your own WebSocket server can be an attractive alternative. The downside, obviously, is that your company is now in charge of maintenance and of keeping the service running. Services like Pusher.com, on the other hand, specialize in exactly that.
But in this article we will see how easy it is to run your own WebSocket server with PHP only (and JavaScript on the frontend, of course). To be specific, we are going to use Laravel and Vue.js.
We will use the amazing package laravel-websockets by Marcel Pociot and Freek Van der Herten which lets us run our own WebSocket server by taking away a lot of the "pain" to set one up.
This package has multi-tenancy support by default, so basically you have two options: run the WebSocket server as a standalone application (which can then serve one or more projects), or run it inside your existing web application.
We opt for option number 1 and separate the WebSocket server and our web project. For our demo we will develop a very simple chat app, which is a natural fit for showcasing realtime functionality.
So this is what we are going to do:
In part 1 (this post) we are setting up the websocket server application and provide an API endpoint for our client chat app.
In part 2 we are setting up the client chat app using Laravel and Vue.js and use Laravel Echo as a wrapper for the Javascript implementation.
Part 3 will deep dive a little bit into a couple of configurations that worked for me in my local development environment and on a DigitalOcean droplet.
So let's start by creating a new Laravel app for the Laravel WebSocket server.
laravel new websocket-server
Next we set up a database and add the credentials into our .env file.
The WebSocket server package comes with a statistics dashboard which by default is only accessible in the local environment. To access the dashboard in a production environment we will have to make a small change later. We will also add some login functionality to protect the dashboard from unauthorized access.
composer require laravel/ui
php artisan ui vue --auth
Time to compile our frontend scaffolding:
npm install && npm run dev
Since we don't need any registration functionality we can just replace
Auth::routes();
with
Auth::routes(['register' => false]);
in our routes/web.php file.
We need a user to log in with.
So let's run our migrations to generate the users table.
php artisan migrate
If you encounter a "specified key was too long" error just make some minor adjustments to your AppServiceProvider.php file and add
Schema::defaultStringLength(191);
to the boot method and don't forget to add
use Illuminate\Support\Facades\Schema;
to the head of the file.
I am doing a quick php artisan tinker to create a user.
$user = new App\User
$user->name = 'admin'
$user->email = '<EMAIL>'
$user->password = Hash::make('<PASSWORD>')
$user->save();
It's time to install the laravel-websockets package.
composer require beyondcode/laravel-websockets
The migration file for the statistics database table can be generated with
php artisan vendor:publish --provider="BeyondCode\LaravelWebSockets\WebSocketsServiceProvider" --tag="migrations"
Finally we have to run our migrations again
php artisan migrate
We also need the package's config file. So let's generate that:
php artisan vendor:publish --provider="BeyondCode\LaravelWebSockets\WebSocketsServiceProvider" --tag="config"
The most important part in the config file is the apps array. As already mentioned the laravel-websockets package comes with multi-tenancy functionality out of the box. So this is the place to add as many apps as you need.
Although we are not using the pusher.com service, we have to set values for PUSHER_APP_ID, PUSHER_APP_NAME, PUSHER_APP_KEY, PUSHER_APP_SECRET and PUSHER_APP_PATH in the .env file.
This is because the laravel-websockets package is fully compatible with the Pusher API, so we can make use of it. What we are basically doing is using fake IDs; we just have to be consistent with them across our project.
This is how my apps array looks for now. I have added some more environment variables to be more flexible.
'apps' => [
[
'id' => env('PUSHER_APP_ID'),
'name' => env('PUSHER_APP_NAME'),
'key' => env('PUSHER_APP_KEY'),
'secret' => env('PUSHER_APP_SECRET'),
'path' => env('PUSHER_APP_PATH'),
'capacity' => null,
'enable_client_messages' => false,
'enable_statistics' => env('PUSHER_APP_ENABLE_STATISTICS'),
],
],
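Since the IDs are fake anyway, any placeholder values will do, as long as the client chat app later uses the same ones. For example (arbitrary placeholders):

```
PUSHER_APP_ID=local-app-id
PUSHER_APP_NAME="Websocket Demo"
PUSHER_APP_KEY=local-app-key
PUSHER_APP_SECRET=local-app-secret
PUSHER_APP_PATH=
```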
The statistics dashboard of the Websocket server should be available under [YOUR DOMAIN]/laravel-websockets.
You can change this path in config/websockets.php.
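In version 1 of the package the dashboard path is a top-level entry in that file. A sketch (the default value is shown; check your installed config version):

```php
// config/websockets.php
'path' => 'laravel-websockets',
```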
As mentioned before the dashboard by default is only reachable in local environment.
Here you can read more about it and how to change it.
In config/websockets.php we also add the 'auth:web' middleware to the middleware array to protect the dashboard.
...
'middleware' => [
'web',
Authorize::class,
'auth:web'
],
...
I have added another environment variable in .env
LARAVEL_WEBSOCKETS_PERFORM_DNS_LOOKUP=true
and use it in config/websockets.php
'perform_dns_lookup' => env('LARAVEL_WEBSOCKETS_PERFORM_DNS_LOOKUP', true),
in order to have the websockets package write statistics into the database.
We need the official Pusher PHP SDK, but it should already have been installed along with the websocket server package. If not, just run:
composer require pusher/pusher-php-server "~4.0"
Since we want to broadcast events from Laravel by using the Pusher API we have to change
BROADCAST_DRIVER=log
to
BROADCAST_DRIVER=pusher
in .env.
By default the Pusher SDK assumes it is broadcasting to the pusher.com servers. Since we are not using that service, we have to make some more modifications to the "options" of the pusher connection in config/broadcasting.php.
...
'pusher' => [
'driver' => 'pusher',
'key' => env('PUSHER_APP_KEY'),
'secret' => env('PUSHER_APP_SECRET'),
'app_id' => env('PUSHER_APP_ID'),
'options' => [
'cluster' => env('PUSHER_APP_CLUSTER'),
'encrypted' => env('PUSHER_APP_ENCRYPTED'),
'host' => env('PUSHER_APP_HOST'),
'port' => env('PUSHER_APP_PORT'),
'scheme' => env('PUSHER_APP_SCHEME')
],
],
...
Again I have added more environment variables.
PUSHER_APP_ENCRYPTED=false
PUSHER_APP_HOST="127.0.0.1"
PUSHER_APP_PORT=6001
PUSHER_APP_SCHEME="http"
Next it's time to create an event to broadcast.
php artisan make:event NewChatMessage
This is what the NewChatMessage event looks like:
<?php
namespace App\Events;
use Illuminate\Broadcasting\Channel;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Broadcasting\PresenceChannel;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;
class NewChatMessage implements ShouldBroadcast
{
use Dispatchable, InteractsWithSockets, SerializesModels;
public $message;
public $user;
/**
* Create a new event instance.
*
* @return void
*/
public function __construct($message, $user)
{
$this->message = $message;
$this->user = $user;
}
/**
* Get the channels the event should broadcast on.
*
* @return \Illuminate\Broadcasting\Channel|array
*/
public function broadcastOn()
{
return new Channel('chat');
}
}
Our event accepts a message and a user/author and implements ShouldBroadcast.
We are using Channel as opposed to PrivateChannel for our demo. In a real-world application your users would probably have to log in to their chat app, and you would likely use private channels.
Finally it's time to start our websocket server
php artisan websockets:serve
Now in our dashboard under [YOUR DOMAIN]/laravel-websockets we can select the client app and press Connect.
The last thing we need (websocket-)server-side is an API endpoint where our Laravel client chat app can send chat messages to.
Add a route to routes/api.php
Route::post('/message', 'MessageController@broadcast');
and create a controller
php artisan make:controller MessageController
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use App\Events\NewChatMessage;
class MessageController extends Controller
{
public function broadcast(Request $request) {
if (! $request->filled('message')) {
return response()->json([
'message' => 'No message to send'
], 422);
}
// TODO: Sanitize input
event(new NewChatMessage($request->message, $request->user));
return response()->json([], 200);
}
}
For quick tests of HTTP endpoints I like using a Visual Studio Code extension called REST Client.
Just create a file test.http in your root folder and tell Git to ignore it in .gitignore if you like.
test.http could look like this:
POST [YOUR DOMAIN]/api/message
Content-Type: application/json
Accept: application/json
{
"message": "Test message",
"user": "Testuser"
}
If the Send request link does not show up add a couple of empty lines at the top of the file.
Sending the test request should now trigger our event and be broadcast to our websocket server.
It should pop up in the Events section of [YOUR DOMAIN]/laravel-websockets.
That's it server-side.
Let's proceed with part 2 of this article.
You can find the full source code in this GitHub repo.
Now let's proceed to the fun part by starting a new Laravel project that provides a video file upload to an S3 bucket and starts a label detection analysis with Amazon Rekognition. We will write a command that checks for the results of the video analysis and can be executed by a cronjob. The results will be displayed in a list of uploaded videos and on the corresponding detail pages.
Hint: I am using a Homestead box to run my environment. If you are doing the same make sure to use "schedule: true" in your Homestead file in order to run the cronjob.
Let's create a new Laravel project "RekognitionTest" from the command line and modify the .env file with the credentials of a database of your choice.
Next we need to define some routes in routes/web.php.
The homepage shows a list of all videos that we have uploaded for analysis and if the analysis process has already been completed.
The results page displays the detailed results of the analysis of each video.
The upload GET route shows the upload form while the upload POST route stores info in the database and uploads the video to S3.
// routes/web.php
Route::get('/', 'RekognitionController@index');
Route::get('/results', 'RekognitionController@results');
Route::get('/upload', 'RekognitionController@upload');
Route::post('/upload', 'RekognitionController@store');
We'll use Composer to install the packages we need to get things running: the AWS PHP SDK, the driver that lets Laravel's filesystem abstraction (Flysystem) handle S3 uploads, and a cached adapter for Flysystem to speed things up.
composer require aws/aws-sdk-php league/flysystem-aws-s3-v3 league/flysystem-cached-adapter
Next, we will add a couple of configuration values to the .env file. We should have all these values ready by now; if not, please have a look at part 1 of this tutorial. Don't forget to adjust the values to your AWS setup.
AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
FILESYSTEM_CLOUD=s3
AWS_DEFAULT_REGION='eu-west-1'
AWS_BUCKET='rekbucket4711'
AWS_SNS_TOPIC_ARN='arn:aws:sns:eu-west-1:xxxxxxxxxxxx:RekognitionTopic'
AWS_IAM_ROLE_ARN='arn:aws:iam::xxxxxxxxxxxx:role/RekognitionRole'
The AWS PHP SDK will automatically make use of the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values.
The other values are being used explicitly in config/filesystems.php or when we are working with the AWS PHP SDK.
Model and migration wise we can keep things simple with just one database table. So let's create the Video model and its database migration with just one Laravel Artisan command.
php artisan make:model Video -m
In the Video migration we use LONGTEXT as the data type for the 'results' field, since the analysis results for a video can become pretty large. LONGTEXT offers about as much space as the JSON data type, which could be an interesting alternative in a more advanced real-life application: the JSON data type would allow us to query the JSON directly with SQL.
<?php
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;
class CreateVideosTable extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::create('videos', function (Blueprint $table) {
$table->increments('id');
$table->string('name');
$table->string('original_name');
$table->string('aws_job_id');
$table->tinyInteger('analyzed')->default(0)->index();
$table->longText('results')->nullable();
$table->timestamps();
});
}
/**
* Reverse the migrations.
*
* @return void
*/
public function down()
{
Schema::dropIfExists('videos');
}
}
php artisan migrate
In our Video model we add the 'results' column to the $casts array and declare its type as 'array'. That way the results we retrieve from Amazon Rekognition are automatically serialized into JSON when we store them in our database, and automatically deserialized to a PHP array when we read them back.
<?php
namespace App;
use Illuminate\Database\Eloquent\Model;
class Video extends Model
{
/**
* The attributes that should be cast to native types.
*
* @var array
*/
protected $casts = [
'results' => 'array'
];
}
Next, we need a view to display our file upload form. I am using Tailwind CSS but feel free to use any "framework" or none at all.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<link href="https://cdn.jsdelivr.net/npm/tailwindcss/dist/tailwind.min.css" rel="stylesheet">
<title>Amazon Rekognition Test</title>
</head>
<body class="bg-grey-lighter h-screen font-sans">
<div class="container mx-auto h-full flex justify-center items-center">
<div class="w-1/3">
<h1 class="font-hairline mb-6 text-center">Amazon Rekognition Test</h1>
<form action="/upload" method="post" enctype="multipart/form-data">
<input type="hidden" name="_token" value="{{ csrf_token() }}">
<div class="border-teal p-8 border-t-12 bg-white mb-6 rounded-lg shadow-lg">
@if (session('success'))
<div class="border-green p-4 text-green">
{{ session('success') }}
</div>
@endif
@if(count($errors) > 0)
<div class="border-red p-4 text-red">
Something went wrong<br><br>
<ul>
@foreach ($errors->all() as $error)
<li>{{ $error }}</li>
@endforeach
</ul>
</div>
@endif
<div class="mb-4">
<label class="font-bold text-grey-darker block mb-2">Video</label>
<input type="file" name="file" class="block appearance-none w-full bg-white border border-grey-light hover:border-grey px-2 py-2 rounded shadow">
</div>
<div class="flex items-center justify-between">
<button class="bg-teal-dark hover:bg-teal text-white font-bold py-2 px-4 rounded">
Upload
</button>
</div>
</div>
</form>
</div>
</div>
</body>
</html>
To process the uploaded file we are going to need a controller as defined in our routes file.
php artisan make:controller RekognitionController
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use Aws\Rekognition\RekognitionClient;
use App\Video;
class RekognitionController extends Controller
{
/**
* Show the list of uploaded videos
*
* @return View
*/
public function index() {
// to be added later
}
/**
* Shows the analysis results for a video
*
* @param int $id
* @return View
*/
public function results($id) {
// to be added later
}
/**
* Show the upload form
*
* @return View
*/
public function upload() {
return view('upload');
}
/**
* Upload a video to S3 and store info in local DB
*
* @param Request $request
* @return Response
*/
public function store(Request $request) {
$request->validate([
'file' => 'required|file|max:5120|mimes:mp4',
]);
$originalFilename = $request->file->getClientOriginalName();
$fileExtension = $request->file->getClientOriginalExtension();
$uniqueFilename = str_random(32) . '.' . $fileExtension;
$path = $request->file->storeAs('', $uniqueFilename, 's3');
$client = new RekognitionClient([
'region' => env('AWS_DEFAULT_REGION', 'eu-west-1'),
'version' => 'latest'
]);
$result = $client->startLabelDetection([
'ClientRequestToken' => str_random(),
'JobTag' => 'rekognition-test',
'MinConfidence' => 50,
'NotificationChannel' => [
'RoleArn' => env('AWS_IAM_ROLE_ARN'),
'SNSTopicArn' => env('AWS_SNS_TOPIC_ARN'),
],
'Video' => [
'S3Object' => [
'Bucket' => env('AWS_BUCKET'),
'Name' => $uniqueFilename
],
],
]);
$video = new Video;
$video->name = $uniqueFilename;
$video->original_name = $originalFilename;
$video->aws_job_id = $result->get('JobId');
$video->save();
// dd($result);
return back()
->with('success','Video has been successfully uploaded');
}
}
We are doing a couple of things here. First we validate the input file: for this demo the upload has to be an MP4 (Amazon Rekognition can handle .mov too) of at most 5 MB. Next we generate a random filename for the video and upload it to our S3 bucket. The AWS PHP SDK provides some convenient methods for Amazon Rekognition: we initialize a RekognitionClient, which we then use to call startLabelDetection. The startLabelDetection method needs to know which video to process, so we provide the S3 bucket name as well as the video's filename. We also define a NotificationChannel so that Amazon Rekognition knows whom or what to notify about the results of the analysis. For our demo the ClientRequestToken has to be a random string; otherwise Amazon Rekognition would see every request as the same job and return the same JobId over and over. The JobTag is just an identifier that Rekognition sends to SNS together with the results; we don't really need it for this small demo. MinConfidence tells Amazon Rekognition how sure it has to be in its analysis in order to send us back any results.
The startLabelDetection call returns a result object. For us the only important piece of information is the JobId, which we save to our database together with the video's original filename, its random filename, and some timestamps. After that we return to our upload form.
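For illustration, the parameter array we pass to startLabelDetection could also be assembled by a small helper. This is a hypothetical plain-PHP sketch; the bucket name and ARNs below are placeholders, not real resources:

```php
<?php

// Hypothetical helper assembling the startLabelDetection parameters.
// Bucket name and ARNs are placeholders for illustration only.
function buildLabelDetectionParams(
    string $bucket,
    string $filename,
    string $roleArn,
    string $topicArn
): array {
    return [
        // Unique per request; otherwise Rekognition would treat repeated
        // requests as the same job and return the same JobId.
        'ClientRequestToken' => bin2hex(random_bytes(16)),
        'JobTag'             => 'rekognition-test',
        'MinConfidence'      => 50,
        'NotificationChannel' => [
            'RoleArn'     => $roleArn,
            'SNSTopicArn' => $topicArn,
        ],
        'Video' => [
            'S3Object' => [
                'Bucket' => $bucket,
                'Name'   => $filename,
            ],
        ],
    ];
}

$params = buildLabelDetectionParams(
    'my-bucket',
    'video.mp4',
    'arn:aws:iam::123456789012:role/RekognitionRole',
    'arn:aws:sns:eu-west-1:123456789012:rekognition-results'
);
```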
Next, let's write a command that tries to retrieve the results from Amazon Rekognition and that we can run manually or via a cronjob.
php artisan make:command GetRekognitionResults
<?php
namespace App\Console\Commands;
use Illuminate\Console\Command;
use Aws\Rekognition\RekognitionClient;
use App\Video;
class GetRekognitionResults extends Command
{
/**
* The name and signature of the console command.
*
* @var string
*/
protected $signature = 'rekognition:get-results';
/**
* The console command description.
*
* @var string
*/
protected $description = 'Retrieves the Rekognition video analysis results';
/**
* Create a new command instance.
*
* @return void
*/
public function __construct()
{
parent::__construct();
}
/**
* Execute the console command.
*
* @return mixed
*/
public function handle()
{
$videos = Video::where('analyzed', '<>', 1)
->orderBy('created_at', 'ASC')
->get();
if($videos->isNotEmpty()) {
$client = new RekognitionClient([
'region' => env('AWS_DEFAULT_REGION', 'eu-west-1'),
'version' => 'latest'
]);
foreach ($videos as $srcVideo) {
$result = $client->getLabelDetection([
'JobId' => $srcVideo->aws_job_id
]);
$this->info('Checking video ' . $srcVideo->aws_job_id . ' ' . $srcVideo->original_name);
if($result->get('JobStatus') == 'SUCCEEDED') {
$this->info('Video analysis results retrieved for ' . $srcVideo->aws_job_id . ' ' . $srcVideo->original_name);
$video = Video::find($srcVideo->id);
$video->results = $result->get('Labels');
$video->analyzed = 1;
$video->save();
}
}
}
}
}
This command loops through all videos that have not been processed yet. As before, we initialize a RekognitionClient. This time we use getLabelDetection from the AWS PHP SDK and pass it the corresponding JobId for each video. If the result reports a JobStatus of "SUCCEEDED", we update the video's database record by adding the results of the analysis and setting the analyzed flag to TRUE.
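The Labels payload we store in the results column is a list of entries, each with a Timestamp and a nested Label. A minimal sketch of working with that structure (the sample data below is made up and simplified compared to the real Rekognition response):

```php
<?php

// Made-up, simplified sample of the Labels structure returned by
// getLabelDetection and stored in the results column.
$labels = [
    ['Timestamp' => 0,    'Label' => ['Name' => 'Car',    'Confidence' => 97.5]],
    ['Timestamp' => 500,  'Label' => ['Name' => 'Person', 'Confidence' => 55.2]],
    ['Timestamp' => 1000, 'Label' => ['Name' => 'Tree',   'Confidence' => 51.0]],
];

// Keep only labels Rekognition is at least 90% confident about.
$confident = array_values(array_filter($labels, function ($entry) {
    return $entry['Label']['Confidence'] >= 90;
}));
```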
We can now add our command to the schedule in app/Console/Kernel.php
and set up a cronjob on our server.
<?php
/**
* Define the application's command schedule.
*
* @param \Illuminate\Console\Scheduling\Schedule $schedule
* @return void
*/
protected function schedule(Schedule $schedule)
{
$schedule->command('rekognition:get-results')->everyMinute();
}
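The scheduler itself is triggered by a single cron entry that runs Laravel's schedule:run command every minute. This is the standard entry from the Laravel documentation; adjust the project path to your server:

```shell
* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1
```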
All that is left now is to build two more views and modify our controller to serve these views.
<?php
/**
* Show the list of uploaded videos
*
* @return View
*/
public function index() {
$videos = Video::orderBy('created_at', 'desc')->get();
return view('index', [
'videos' => $videos
]);
}
/**
* Shows the analysis results for a video
*
* @param int $id
* @return View
*/
public function results($id) {
$video = Video::find($id);
return view('results', [
'video' => $video
]);
}
The index view showing a list of all videos could be as follows:
@extends('layouts.app')
@section('content')
<div class="p-4">
<h1 class="font-hairline mb-4">Amazon Rekognition Test</h1>
<p class="mb-4"><a href="{{ url('/upload') }}" class="bg-teal-dark hover:bg-teal text-white font-bold py-2 px-4 rounded no-underline">Upload new video</a></p>
<div class="border-teal p-8 border-t-12 bg-white mb-6 rounded-lg shadow-lg">
@if($videos->isNotEmpty())
<table class="table-auto">
<tr>
<th class="p-4 border-teal border-solid border-b-2 text-left">Video</th>
<th class="p-4 border-teal border-solid border-b-2 text-left">Uploaded at</th>
<th class="p-4 border-teal border-solid border-b-2 text-left">Analyzed</th>
<th class="p-4 border-teal border-solid border-b-2"></th>
</tr>
@foreach($videos as $video)
<tr>
<td class="p-4 text-left">{{ $video->original_name }}</td>
<td class="p-4 text-left">{{ $video->created_at }}</td>
<td class="p-4 text-left">{{ $video->analyzed ? $video->updated_at : '—' }}</td>
<td>
@if($video->analyzed)
<a href="{{ url('/results', ['id' => $video->id ]) }}" class="bg-teal-dark hover:bg-teal text-white font-bold py-2 px-4 rounded no-underline">Result</a>
@endif
</td>
</tr>
@endforeach
</table>
@else
<p>No videos available</p>
@endif
</div>
</div>
@endsection
And lastly, we need a view to display the detailed results of the video analysis. For our demo we are simply showing a table with the found labels, the timestamps at which they occurred in the video as well as the confidence level that Amazon Rekognition has regarding each label.
@extends('layouts.app')
@section('content')
<div class="p-4">
<h1 class="font-hairline mb-4">Amazon Rekognition Test</h1>
<h2 class="font-hairline mb-4">{{ $video['original_name'] }}</h2>
<p class="mb-4"><a href="{{ url('/') }}" class="bg-teal-dark hover:bg-teal text-white font-bold py-2 px-4 rounded no-underline">Back</a></p>
<div class="border-teal p-8 border-t-12 bg-white mb-6 rounded-lg shadow-lg">
@if(!empty($video['results']))
<table class="table-auto">
<tr>
<th class="p-4 border-teal border-solid border-b-2 text-left">Timestamp</th>
<th class="p-4 border-teal border-solid border-b-2 text-left">Label</th>
<th class="p-4 border-teal border-solid border-b-2 text-left">Confidence</th>
</tr>
@foreach($video['results'] as $result)
<tr>
<td class="p-4 text-left">{{ $result['Timestamp'] }}</td>
<td class="p-4 text-left">{{ $result['Label']['Name'] }}</td>
<td class="p-4 text-left">{{ $result['Label']['Confidence'] }}</td>
</tr>
@endforeach
</table>
@else
<p>No results available</p>
@endif
</div>
</div>
@endsection
That's it. Of course label detection is not the only feature of Amazon Rekognition. The service also offers Facial Recognition, Unsafe Content Detection and more.
You can find the full source code in this GitHub repo.
The initial setup and the asynchronous nature of the recognition workflow can be a little tricky at first, though.
As always there is more than one way to achieve what we want within the vast amount of AWS services and with PHP.
So this is what we are going to do:
In part 1 (this post) we are setting up Amazon Rekognition, IAM, S3, Simple Notification Service (SNS) and deal with the necessary rights.
In part 2 we are starting a new Laravel project that provides a video file upload to an S3 bucket and starts a label detection analysis with Amazon Rekognition.
We will write a command that checks for the results of the video analysis and that can be executed by a cronjob.
The results will be displayed in a list of uploaded videos and the corresponding detailed info pages.
Note: For the sake of this tutorial, we are using full access rights for the various Amazon services. In a production environment we would want to restrict rights to what is really necessary.
Note: We are using the same AWS region for everything we are doing on AWS. Otherwise Amazon Rekognition won't be able to communicate with our S3 bucket.
First let's add a new user in IAM.
Let's switch to Amazon S3 and create a bucket. Unlike with image analysis, videos processed by Amazon Rekognition have to be stored in S3.
Next we will configure the Simple Notification Service (SNS).
SNS is the service that receives the analysis results from the Amazon Rekognition process. The PHP SDK makes it easy to check the progress of the analysis and fetch the results once finished; we will do that using a cronjob.
In a real-life application there are a couple of more advanced ways of handling this.
We could have SNS push the Rekognition results to our own HTTP endpoint and process them there.
We could couple a Simple Queue Service (SQS) queue with our SNS topic and benefit from being able to pull data from SQS.
We could connect AWS Lambda to the SNS topic and write a Lambda function that deals with the results.
To use SNS we first have to create an SNS topic.
To grant Amazon Rekognition permission to publish to the Amazon SNS topic, we have to apply an IAM role to it and adjust some permissions.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": "arn:aws:iam::166755042369:role/RekognitionRole"
}
]
}
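For Rekognition to be able to assume that role in the first place, the role's trust relationship has to name the Rekognition service as a principal. A typical trust policy looks like this (a sketch based on the AWS documentation; adjust it to your own setup):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "rekognition.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```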
That's it.
We should now be ready to proceed with the fun part in Laravel in part 2 of this tutorial.
If you want to follow the progress of the Google Maps package the Flutter team is currently working on, have a look here: https://pub.dartlang.org/packages/google_maps_flutter
While you can already use this developer preview, one alternative for now is the map_view package developed by AppTree Software. This package supports markers, polylines, and polygons, among other features, and in my experience runs very stably.
Update February 2019: This plugin is no longer under development. There are a couple of forks though. The one I am using is https://github.com/Eimji/flutter_google_map_view.
It comes with a caveat, though: the map opens as a full-screen activity and is not embedded in the Flutter widget hierarchy. You won't be able to show a smaller version of the map or overlay it with custom buttons. You can opt not to show the appbar, or, if you do, your customization options are limited.
Therefore the objective is to adapt the map's appbar as much as possible to the appbar styling you are using throughout your app.
To achieve that in Android we have to modify the AndroidManifest.xml and add some drawables.
After you have followed the package's installation steps open the android/app/src/main/AndroidManifest.xml
and change
<activity
android:name="com.apptreesoftware.mapview.MapActivity"
android:theme="@style/Theme.AppCompat.Light.DarkActionBar"
/>
to
<activity
android:name="com.apptreesoftware.mapview.MapActivity"
android:theme="@style/CustomActionBarTheme"
/>
Next let's define two colors, a background color and a text color, in a new file colors.xml
in android/app/src/main/res/values
. Modify the color values to match your app theme.
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="custom_action_bar_background_color">#1565c0</color>
<color name="custom_action_bar_text_color">#FFFFFF</color>
</resources>
We'll also need a new drawable actionbar_background.xml
in android/app/src/main/res/drawable
:
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item android:drawable="@color/custom_action_bar_background_color" />
</layer-list>
Lastly to put everything together we create a themes.xml
in android/app/src/main/res/values
. This snippet is taken directly from the Android documentation with some adaptations for our use case.
<?xml version="1.0" encoding="utf-8"?>
<resources>
<!-- the theme applied to the application or activity -->
<style name="CustomActionBarTheme"
parent="@style/Theme.AppCompat.Light">
<item name="android:actionBarStyle">@style/MyActionBar</item>
<item name="android:actionMenuTextColor">@color/custom_action_bar_text_color</item>
<!-- Support library compatibility -->
<item name="actionBarStyle">@style/MyActionBar</item>
<item name="actionMenuTextColor">@color/custom_action_bar_text_color</item>
</style>
<!-- ActionBar styles -->
<style name="MyActionBar"
parent="@style/Widget.AppCompat.Light.ActionBar.Solid.Inverse">
<item name="android:background">@drawable/actionbar_background</item>
<item name="android:titleTextStyle">@style/MyActionBarTitleText</item>
<!-- Support library compatibility -->
<item name="background">@drawable/actionbar_background</item>
<item name="titleTextStyle">@style/MyActionBarTitleText</item>
</style>
<!-- ActionBar title text -->
<style name="MyActionBarTitleText"
parent="@style/TextAppearance.AppCompat.Widget.ActionBar.Title">
<item name="android:textColor">@color/custom_action_bar_text_color</item>
<!-- The textColor property is backward compatible with the Support Library -->
</style>
</resources>
As a result you should have something like this:
While this is far from a perfect way to handle our customization needs, it serves well as a temporary workaround until the Flutter team publishes the first stable version of its Flutter Google Maps package.
You can find the full source code in this GitHub repo.
Note: I haven't tested this demo on an iOS device or simulator. But it should work fine if you follow the installation steps.