For the past few weeks the PHP world has been abuzz with Expose, an Open Source alternative to ngrok. If you're not familiar with either, they both let you share a site hosted on your local web server using a subdomain that can be accessed over the Internet. This is great for sharing a site with your client, testing HTTPS behavior with a valid TLS certificate, developing social media / SSO integrations, testing sites on real mobile devices, etc.
Expose lets you run your own server for the full white label experience. It's also free of charge – except for the cost of running your own server, of course. In this article I'll tell you how to create your own Expose server and share your Joomla site with it.
Ingredients
For this article we'll be using the following technical components.
- A domain name you own. Nothing else should be hosted on it; Expose is going to be creating arbitrary subdomains on it. For illustration purposes I will be using the domain example.com. Wherever you see example.com, substitute your domain name.
- Ubuntu Server 20.04 hosted on Linode. I chose Linode because at $5 per month it's dead cheap. You can use any live server, directly connected to the Internet, under a domain name that's under your full control, where you can install and run software (you have root access).
- Amazon Route 53 for DNS hosting. I chose it because it's cheap and it can be automated for authenticating with Let's Encrypt. You can of course use whichever DNS hosting you prefer as long as you can create arbitrary records.
- Let's Encrypt for free TLS (HTTPS) certificates. We will create a wildcard ("star") certificate, valid for all subdomains. If you already have a commercially bought and rather expensive wildcard certificate you can use that one instead.
- NginX for TLS proxying. Expose itself doesn't support TLS so we'll be using NginX as our TLS terminator.
- Supervisor to keep Expose up and running on our server.
- PHP on the server and locally. Considering that Expose is written in PHP it makes sense.
- Composer to install Expose.
- Expose itself on both the server and the client (locally).
For the local server I assume that you're running Linux, macOS or a similar Operating System, or Windows with WSL (Windows Subsystem for Linux).
Side note. As with all my articles I give you instructions using specific technologies which I use and are somewhat beginner friendly. Sure, I use Nano in my examples, you can use Vim, Emacs or even Visual Studio Code over SFTP. I use Linode, you might want to use Amazon EC2 or DigitalOcean. I use Route 53, you may want to use Dyn. Just bear in mind that you can use whatever you want but I can only tell you how I configured what I used, not what you used. Please neither start a flame war on which technology I chose to use nor ask me to help you with something I don't use. Both are unproductive and mutually frustrating experiences. Thank you in advance!
Without further ado, let's start building our Expose server.
Spin up a server
We start by creating a new Ubuntu Server 20.04 server. I used Linode to create a new "nanode", the smallest instance they make available. It only gets 512MB RAM and an anaemic, shared processor core but it's more than enough for the simple task we have at hand.
For the distribution I chose Ubuntu Server 20.04 since this is the distro I am most familiar with.
If you follow the same path stop after creating the new server. When it comes online go to your Linode control panel and note down its IPv4 and IPv6 addresses. We'll need them in a few minutes.
Set up DNS
You don't have to use Amazon Route 53 but it makes life easier. Automating the issuance of TLS certificates with Let's Encrypt is much simpler if you can automate the requisite DNS changes for domain verification, and for yours truly there's nothing simpler than using Amazon Route 53.
Here's what to do:
- Go to the Route 53 page on the AWS console.
- Click on 'Hosted Zones'.
- Click 'Create Hosted Zone'.
- Use your domain name. As I explained earlier, I will be using example.com for illustration purposes.
- Create an A record with the IPv4 address you noted down when creating the server.
- Create an AAAA record with the IPv6 address you noted down when creating the server.
- Create a CNAME expose.example.com pointing back to your domain name, example.com. This will be our Expose server's control panel subdomain.
- Create a CNAME *.example.com pointing back to your domain name, example.com. This allows Expose to use arbitrary, random subdomains to share our local sites without having to explicitly define them in the DNS zone one by one. Neat, huh?
Do keep in mind that some TLDs, such as .dev, are marked as 'secure' by browsers. This means that they can only be accessed by a web browser using HTTPS. If you try using them with plain old HTTP they will fail. In my humble opinion, it makes sense NOT to use such a domain if you're interested in developing or troubleshooting features that have to do with HTTP to HTTPS redirections.
At this point remember to update your domain registrar with the Amazon DNS servers (listed in the uneditable NS record in Route 53). It might take a few minutes to several hours for these changes to be propagated.
If you are unsure whether the changes have propagated, run:
dig example.com @8.8.8.8 ns
If the changes propagated you should see the Amazon nameservers being listed. Do NOT proceed with the rest of the instructions until the DNS nameserver change has propagated.
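If you want to script this check and poll until the switch happens, here is a hypothetical helper (not part of any tool mentioned here). It relies only on the fact that Route 53 nameserver hostnames all contain the string awsdns, and it parses output you captured from dig:

```shell
#!/bin/sh
# Hypothetical helper: Route 53 nameserver hostnames all contain "awsdns",
# so we can simply grep for that in the output of:
#   dig example.com @8.8.8.8 ns +short
ns_propagated() {
  printf '%s\n' "$1" | grep -q 'awsdns'
}

# Example with captured dig output (a made-up Route 53 nameserver name):
if ns_propagated "ns-123.awsdns-45.com."; then
  echo "propagated"
else
  echo "not yet"
fi
```

Run it every few minutes (or wrap it in `watch`) until it reports the Amazon nameservers.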
Change the server's hostname
SSH into the Linode, e.g. ssh root@example.com, and run:
hostnamectl set-hostname example.com
nano /etc/hosts
After the first line of the file (127.0.0.1 localhost), add a new one with your domain:
127.0.0.1 example.com
Press CTRL-X, then Y, then ENTER to save and exit.
Install dependencies
First, let's make sure that our server is up-to-date:
apt update && apt upgrade
If you're asked whether you want to install updates, select Y (yes). A few seconds to minutes later your server will be up-to-date.
We'll need to add some software on our server in anticipation of the next steps:
apt install php7.4-cli composer certbot python3-certbot-dns-route53 nginx php7.4-zip php7.4-sqlite3 unzip supervisor
Install wildcard TLS certificate
At this point we can use Let's Encrypt to install a free of charge TLS certificate on our server. The certificate must cover both the base domain name (example.com), which will be used as our Expose server's endpoint, and all of its subdomains (*.example.com), which will be dynamically generated by Expose when we're sharing a site.
Let's Encrypt provides a dead simple tool called certbot which automates the process. It only needs a way to verify that we are in control of the domain name we request a certificate for. Since our server doesn't do email and doesn't have a web server on it, we need to use the DNS verification method.
Since we are using Amazon Route 53 as the DNS host for this server, we can automate the DNS verification by telling certbot to make the necessary changes in Amazon Route 53 directly. Run the following command, substituting your own Amazon Access and Secret Keys for Your_Amazon_Access_Key and Your_Amazon_Secret_Key respectively.
AWS_ACCESS_KEY_ID=Your_Amazon_Access_Key \
AWS_SECRET_ACCESS_KEY=Your_Amazon_Secret_Key \
certbot certonly \
-d 'example.com,*.example.com' \
--dns-route53
Follow the instructions on the screen to enter your email address, accept the Let's Encrypt license agreement and get your certificate issued. Here's sample output of that command to get a better idea of what to expect:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Found credentials in environment variables.
Plugins selected: Authenticator dns-route53, Installer None
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel): (your email address)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v02.api.letsencrypt.org/directory
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(A)gree/(C)ancel: A
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about our work
encrypting the web, EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for example.com
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/example.com/privkey.pem
Your cert will expire on 2020-09-29. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
Set up automatic renewal of the certificate
Let's Encrypt TLS certificates have a hardcoded expiration date of 90 days. This means that you'd need to remember to go through the same process every 90 days – just short of 3 months. This is annoying and I can guarantee that you'll forget or get too busy to do it in time. Instead, we will automate the certificate renewal with a CRON job.
There's a caveat, though. CRON jobs can only be expressed in terms of months and days of the month. Since 90 days is 2–4 days short of a full three months, we can't express that interval exactly in a CRON job. The solution is to set up a CRON job that runs on the first day of every second month, i.e. every 59 to 62 days. This is OK. Renewing a certificate early doesn't hurt.
So, let's edit the CRON jobs
crontab -e
Use nano if prompted (unless you know how to use another editor). Add a new line:
0 0 1 */2 * AWS_ACCESS_KEY_ID=Your_Amazon_Access_Key AWS_SECRET_ACCESS_KEY=Your_Amazon_Secret_Key certbot renew
It's VERY STRONGLY recommended that you use the Amazon Access and Secret Key for an IAM user which is only allowed to manage TXT records on this particular domain, instead of your root Access and Secret Keys.
Press CTRL-X, then Y, then ENTER to save and exit.
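If you want to create such a restricted IAM user, the policy below is a sketch based on the permissions the certbot dns-route53 plugin documents as required. YOUR_HOSTED_ZONE_ID is a placeholder for the zone ID Route 53 shows for your domain:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:GetChange"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/YOUR_HOSTED_ZONE_ID"]
    }
  ]
}
```

Attach this policy to a dedicated IAM user and use that user's keys in the CRON job above.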
Create an unprivileged user for Expose
We don't want to run Expose with full root privileges. That'd be unsafe. Instead, we're going to create an unprivileged user on our server.
adduser --disabled-password expose
When asked for full name, room number etc just press ENTER.
The user we added has password authentication disabled. This means there are only two ways to log in as that user:
- SSH'ing into the server as root and then running su -l expose.
- Setting up public key authentication to log in via SSH directly.

We are only ever going to use the former.
Install and set up the Expose server
In the same terminal where we're logged in as root we can type:
su -l expose
composer global require beyondcode/expose
logout
Now we need to make sure our Expose server will be automatically started and, should it crash, automatically restart. We're going to use Supervisor for that. For this, we need to create a Supervisor configuration file for Expose:
nano /etc/supervisor/conf.d/expose.conf
Type in the following contents
[program:expose]
command=/home/expose/.config/composer/vendor/bin/expose serve example.com
environment=HOME="/home/expose"
directory=/home/expose/.expose
numprocs=1
autostart=true
autorestart=true
user=expose
Press CTRL-X, then Y, then ENTER to save and exit.
Now we'll let Supervisor know about our changes.
supervisorctl update
supervisorctl restart expose
Next up, we need to configure the Expose server
su -l expose
~/.config/composer/vendor/bin/expose publish
nano ~/.expose/config.php
As you can see, Expose's configuration file is a simple .php file that returns an array, and it has plenty of comments to help us understand what everything is supposed to do. We need to change the following:
- host to your domain, e.g. example.com
- port to 8080
- validate_auth_tokens to true
- users to something you like. Note down the username and password you set up here; you'll need them later.
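Pulled together, the edited part of the server's ~/.expose/config.php would look roughly like this. This is a sketch using the illustrative values from this article, and the exact nesting of these keys can differ between Expose versions (some releases group the token and user settings under an admin section), so locate each key in the published file rather than pasting this verbatim:

```php
<?php
// Excerpt of ~/.expose/config.php on the server (illustrative values only)
return [
    // The domain name the Expose server responds on
    'host' => 'example.com',

    // Expose itself listens here; NginX terminates TLS and proxies to it
    'port' => 8080,

    // Only users with a valid auth token may share sites
    'validate_auth_tokens' => true,

    // Dashboard credentials; note these down for later
    'users' => [
        'admin' => 'choose-a-strong-password',
    ],

    // ... all other published settings stay as they are ...
];
```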
Press CTRL-X, then Y, then ENTER to save and exit.
Let's go back to being root and restart the Expose server.
logout
supervisorctl restart expose
Set up TLS proxying
As I said before, Expose itself doesn't do TLS (HTTPS) but you'll most likely want to access your sites shared with Expose under HTTPS. This can be done by using NginX as our TLS terminator. The idea is that NginX handles the TLS connection and then acts as a proxy between your browser and the Expose server.
For this, we'll remove the default NginX server configuration block installed by Ubuntu and create our own configuration file.
rm /etc/nginx/sites-enabled/default
ln -s /etc/nginx/sites-available/expose /etc/nginx/sites-enabled/expose
nano /etc/nginx/sites-available/expose
Type in the following:
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
server_name _;
# Start the SSL configurations
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_read_timeout 60;
proxy_connect_timeout 60;
proxy_redirect off;
# Allow the use of websockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_read_timeout 60;
proxy_connect_timeout 60;
proxy_redirect off;
# Allow the use of websockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Press CTRL-X, then Y, then ENTER to save and exit.
We'll now restart NginX for the changes to "take".
systemctl restart nginx
At this point we should have an Expose server which is proxied through NginX. Let's make sure it's working.
From a browser, access https://expose.example.com. It will ask you to authenticate. Remember the username and password you entered in the config.php file? That's what you need to enter here. You should then see the Expose server's shared sites page.
Note. If the Users page times out you are missing the SQLite 3 PHP module (in Ubuntu 20.04 this is installed through the php7.4-sqlite3 package). This will also prevent Expose from working. If you have this problem you did NOT follow the instructions in this article. Go back, install the required packages and restart Expose.
We're now on the final stretch! Go to the Users page and create a new user. Copy the generated Auth-Token. This is what allows you to use Expose to share a local site.
Set up the local Expose client
Assuming you have Composer set up locally and its vendor/bin directory is in your PATH:
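If the shell can't find expose after the install, Composer's global bin directory is probably missing from your PATH. Here's a minimal sketch of adding it, assuming the default Linux location (older Composer setups use ~/.composer/vendor/bin instead):

```shell
#!/bin/sh
# Sketch: make sure Composer's global bin directory is in PATH so the
# `expose` binary can be found. The path below is the common default on
# Linux; adjust it if `composer global config bin-dir --absolute` says
# otherwise on your machine.
COMPOSER_BIN="$HOME/.config/composer/vendor/bin"
case ":$PATH:" in
  *":$COMPOSER_BIN:"*) : ;;            # already there, nothing to do
  *) PATH="$PATH:$COMPOSER_BIN" ;;     # append it once
esac
export PATH
```

Add the export to your shell profile (e.g. ~/.bashrc) to make it permanent.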
composer global require beyondcode/expose
expose publish
nano ~/.expose/config.php
Hey, that file looks familiar! Yes, Expose is using the same configuration file for both the server and the local application. We need to make a few changes here.
- host to your domain, e.g. example.com
- port to 443. Yes, the server part was using 8080, BUT we set up NginX as a TLS proxy which listens on port 443. That's why we're using port 443 here. Being an HTTPS port, it means that your authentication token is always sent encrypted over the Internet to the Expose server. Yay, security!
- auth_token to the Auth-Token you got from the server when you created your user.
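For reference, the relevant excerpt of the local ~/.expose/config.php would look something like this. Again, this is a sketch with illustrative values, and the token shown is a placeholder for the one you copied from the Users page:

```php
<?php
// Excerpt of the local (client) ~/.expose/config.php — illustrative values
return [
    'host' => 'example.com',  // your Expose server's domain
    'port' => 443,            // the NginX TLS proxy, not Expose's own 8080
    'auth_token' => 'paste-the-token-from-the-users-page-here',

    // ... all other published settings stay as they are ...
];
```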
Believe it or not, we're now ready!
Share a local site
Sharing a local site requires you to open a terminal window and type something to the tune of:
expose share http://localhost/mysite
where http://localhost/mysite is the URL to your locally hosted site.
After a second or so you will see:
Thank you for using expose.
Local-URL: localhost
Dashboard-URL: http://127.0.0.1:4040
Expose-URL: https://9spt2t56t7.example.com
Visiting https://9spt2t56t7.example.com from a browser will display your local site. This works over the Internet, without the browser connecting directly to your local server. You can try it with a smartphone connected to the Internet over cellular data (WiFi turned off).
Speaking of which, did you notice the Dashboard-URL? Connect to it using a browser on your local computer. It displays a big QR code you can scan on your smartphone to visit the Expose-URL. No typing necessary. Neat!
Well. Almost.
If you try sharing a Joomla! site you'll see that you're immediately redirected back to your local URL (e.g. http://localhost/mysite). Did we just waste our time?
The problem lies in how Joomla and Expose work. Joomla uses the domain name disclosed to it by the web server it runs under. Expose is doing some tunneling to expose our local server under a different domain name. As far as Joomla is concerned, you are trying to access the site from a non-canonical URL so it dutifully redirects you to the canonical URL. Which is useless for our purpose.
One way to solve it is editing your configuration.php and setting $live_site to the Expose-URL given to us by the server. This works most of the time, but it's a subpar user experience.
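For illustration, using the random subdomain from the sample output earlier, that change would look like this in configuration.php (the subdomain is whatever your own expose share run prints):

```php
<?php
class JConfig
{
    // Point Joomla's canonical URL at the Expose subdomain.
    // 9spt2t56t7 is the random subdomain from this article's example.
    public $live_site = 'https://9spt2t56t7.example.com';

    // ... the rest of your configuration stays untouched ...
}
```

Remember to remove the $live_site line (or set it back) when you're done sharing, or the site will keep redirecting to the Expose subdomain.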
Instead of doing that, I wrote a nifty plugin. Install it on your site, move it to the top of the System plugins list and publish it. It will magically make your Joomla site believe its canonical URL is the Expose subdomain.
As for WordPress... Oh, dear. The site's URL is hardcoded in the database and elsewhere. Even though you might be tempted to use a plugin that replaces the local site's domain with the shared Expose subdomain, it's both more impractical than changing Joomla's $live_site in configuration.php and has a lot of limitations which would make sharing with Expose fail miserably. I am not insane enough to even try it. Please don't ask me.
So, is it worth it?
I know that I've written this a lot in pretty much all of my technical articles but... it depends!
If you just want to show a client a preview of the site you're building for them it's probably easier using a dev server and something like Akeeba Backup to transfer a snapshot of the site there. If you, however, want to make changes during the call with the client and have them magically appear in front of their eyes it's probably easier sharing your site with them over Expose.
If you want to test features which depend on HTTPS with a valid, signed certificate – such as W3C Web Authentication – or you want to develop / test features which require an integration with a third party service requiring a live server accessible over the Internet (payment processing, social media integrations and single sign-on easily come to mind) then yeah, it's worth it. That's my use case for Expose. Before Expose I would have to mess with my router's configuration, which was complicated, error-prone and ran the risk of exposing my entire local development server on the Internet instead of a single site I can kill access to with a keystroke.
Another incidental use case for me is working out issues with sites that can be served from multiple domain names / URLs. Also, due to the high latency inherent to using solutions like Expose and Ngrok, I get to test how high-latency connections affect my software – something I could only partially simulate with Charles Proxy and the browsers' developer tools. For whatever reason real, random latency works very differently than the simulated kind.
As they say, your mileage may vary.
Credits: article photo by Gustavo Fring on Pexels.