https://media.notthebee.com/articles/62714720058cc62714720058cd.jpg
"Callin’ Baton Rouge" may very well be Garth Brooks’s best song. And let me tell you something, his fans understand that very well:
Not the Bee
https://assets.amuniversal.com/d60fd5f0a47b013aa5e3005056a9545d
Dilbert Daily Strip
In the sixth part of this tutorial series on developing PHP on Docker we will set up git-secret
to store secrets directly in the repository. Everything will be handled through Docker and
added as make targets for a convenient workflow.
FYI:
This tutorial is a precursor to the next part
Create a CI pipeline for dockerized PHP Apps
because dealing with secrets is an important aspect when setting up a CI system (and later when
deploying to production) – but I feel it’s complex enough to warrant its own article.
All code samples are publicly available in my
Docker PHP Tutorial repository on github.
You find the branch with the final result of this tutorial at
part-6-git-secret-encrypt-repository-docker.
If you want to follow along, please subscribe to the RSS feed
or via email
to get automatic notifications when the next part comes out 🙂
Dealing with secrets (passwords, tokens, key files, etc.) is close to “naming things”
when it comes to hard problems in software engineering. Some things to consider:
In fact, entire products have been built around dealing with secrets, e.g.
HashiCorp Vault,
AWS Secrets Manager or the
GCP Secret Manager. Introducing those in a project comes
with a certain overhead as it’s yet another service that needs to be integrated and
maintained. Maybe it is exactly the right decision for your use case – maybe it’s overkill.
By the end of this article you’ll at least be aware of an alternative with a lower barrier to entry.
See also the Pros and cons section in the end for an overview.
Even though it’s
generally not advised to store secrets in a repository,
I’ll propose exactly that in this tutorial:
The plain secret files are ignored via .gitignore, and only their encrypted counterparts, created via git-secret, are committed. In the end, we will be able to call
make secret-decrypt
to reveal secrets in the codebase, make modifications to them if necessary and then run
make secret-encrypt
to encrypt them again so that they can be committed (and pushed to the remote repository). To
see it in action, check out branch
part-6-git-secret-encrypt-repository-docker
and run the following commands:
# checkout the branch
git checkout part-6-git-secret-encrypt-repository-docker
# build and start the docker setup
make make-init
make docker-build
make docker-up
# "create" the secret key - the file "secret.gpg.example" would usually NOT live in the repo!
cp secret.gpg.example secret.gpg
# initialize gpg
make gpg-init
# ensure that the decrypted secret file does not exist
ls passwords.txt
# decrypt the secret file
make secret-decrypt
# show the content of the secret file
cat passwords.txt
We will set up gpg and git-secret in the php base image, so that the tools become available in
all other containers. Please refer to
Docker from scratch for PHP 8.1 Applications in 2022
for an in-depth explanation of the docker images.
Please note, that there is a caveat when using git-secret in a folder that is shared between
the host system and a docker container. I’ll explain that in more detail (including a workaround)
in section
The git-secret directory and the gpg-agent socket.
gpg is short for The GNU Privacy Guard and is an open source implementation
of the OpenPGP standard. In short, it allows us to create a personal key file pair
(similar to SSH keys) with a private secret key and a public
key that can be shared with other parties so that they can encrypt messages that only you can decrypt.
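As a quick illustration of how such a key pair is used (a sketch – the file name and recipient address are made up):
# encrypt a file for a recipient, using their public key
echo "hello" > message.txt
gpg --encrypt --armor --recipient "[email protected]" --output message.txt.asc message.txt
# only the holder of the matching private key can decrypt it
gpg --decrypt message.txt.asc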
To install it, we can simply run apk add gnupg and thus update
.docker/images/php/base/Dockerfile accordingly
# File: .docker/images/php/base/Dockerfile
RUN apk add --update --no-cache \
bash \
gnupg \
make \
#...
I’ll only cover the strictly necessary gpg commands here. Please refer to
the “Using GPG” section in the git-secret docu
and/or How to generate PGP keys with GPG
for further information.
We need gpg to create the gpg key pair via
name="Pascal Landau"
email="[email protected]"
gpg --batch --gen-key <<EOF
Key-Type: 1
Key-Length: 2048
Subkey-Type: 1
Subkey-Length: 2048
Name-Real: $name
Name-Email: $email
Expire-Date: 0
%no-protection
EOF
The %no-protection will create a key without a password, see
also this gist to “Creating gpg keys non-interactively”.
Output:
$ name="Pascal Landau"
$ email="[email protected]"
$ gpg --batch --gen-key <<EOF
> Key-Type: 1
> Key-Length: 2048
> Subkey-Type: 1
> Subkey-Length: 2048
> Name-Real: $name
> Name-Email: $email
> Expire-Date: 0
> %no-protection
> EOF
gpg: key E1E734E00B611C26 marked as ultimately trusted
gpg: revocation certificate stored as '/root/.gnupg/openpgp-revocs.d/74082D81525723F5BF5B2099E1E734E00B611C26.rev'
You could also run gpg --gen-key without the --batch flag to be guided interactively through the
process.
The private key can be exported via
email="[email protected]"
path="secret.gpg"
gpg --output "$path" --armor --export-secret-key "$email"
This secret key must never be shared!
It looks like this:
-----BEGIN PGP PRIVATE KEY BLOCK-----
lQOYBF7VVBwBCADo9un+SySu/InHSkPDpFVKuZXg/s4BbZmqFtYjvUUSoRAeSejv
G21nwttQGut+F+GdpDJL6W4pmLS31Kxpt6LCAxhID+PRYiJQ4k3inJfeUx7Ws339
XDPO3Rys+CmnZchcEgnbOfQlEqo51DMj6mRF2Ra/6svh7lqhrixGx1BaKn6VlHkC
...
ncIcHxNZt7eK644nWDn7j52HsRi+wcWsZ9mjkUgZLtyMPJNB5qlKQ18QgVdEAhuZ
xT3SieoBPd+tZikhu3BqyIifmLnxOJOjOIhbQrgFiblvzU1iOUOTOcSIB+7A
=YmRm
-----END PGP PRIVATE KEY BLOCK-----
All secret keys can be listed via
gpg --list-secret-keys
Output:
$ gpg --list-secret-keys
/root/.gnupg/pubring.kbx
------------------------
sec rsa2048 2022-03-27 [SCEA]
74082D81525723F5BF5B2099E1E734E00B611C26
uid [ultimate] Pascal Landau <[email protected]>
ssb rsa2048 2022-03-27 [SEA]
You can import the private key via
path="secret.gpg"
gpg --import "$path"
and get the following output:
$ path="secret.gpg"
$ gpg --import "$path"
gpg: key E1E734E00B611C26: "Pascal Landau <[email protected]>" not changed
gpg: key E1E734E00B611C26: secret key imported
gpg: Total number processed: 1
gpg: unchanged: 1
gpg: secret keys read: 1
gpg: secret keys unchanged: 1
Caution: If the secret key requires a password, you would now be prompted for it. We can
circumvent the prompt by using --batch --yes --pinentry-mode loopback:
path="secret.gpg"
gpg --import --batch --yes --pinentry-mode loopback "$path"
See also Using Command-Line Passphrase Input for GPG.
In doing so, we don’t need to provide the password just yet – but we must pass it later when we
attempt to decrypt files.
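For illustration, a manual decryption with such a password-protected key could then look like this (a sketch – the passphrase and file names are made up):
# decrypt a file non-interactively by providing the passphrase on the command line
gpg --batch --yes --pinentry-mode loopback --passphrase "123456" --output decrypted.txt --decrypt encrypted.gpg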
The public key can be exported to public.gpg via
email="[email protected]"
path="public.gpg"
gpg --armor --export "$email" > "$path"
It looks like this:
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBF7VVBwBCADo9un+SySu/InHSkPDpFVKuZXg/s4BbZmqFtYjvUUSoRAeSejv
G21nwttQGut+F+GdpDJL6W4pmLS31Kxpt6LCAxhID+PRYiJQ4k3inJfeUx7Ws339
...
3LLbK7Qxz0cV12K7B+n2ei466QAYXo03a7WlsPWn0JTFCsHoCOphjaVsncIcHxNZ
t7eK644nWDn7j52HsRi+wcWsZ9mjkUgZLtyMPJNB5qlKQ18QgVdEAhuZxT3SieoB
Pd+tZikhu3BqyIifmLnxOJOjOIhbQrgFiblvzU1iOUOTOcSIB+7A
=g0hF
-----END PGP PUBLIC KEY BLOCK-----
List all public keys via
gpg --list-keys
Output:
$ gpg --list-keys
/root/.gnupg/pubring.kbx
------------------------
pub rsa2048 2022-03-27 [SCEA]
74082D81525723F5BF5B2099E1E734E00B611C26
uid [ultimate] Pascal Landau <[email protected]>
sub rsa2048 2022-03-27 [SEA]
The public key can be imported in the same way as private keys via
path="public.gpg"
gpg --import "$path"
Example:
$ gpg --import /var/www/app/public.gpg
gpg: key E1E734E00B611C26: "Pascal Landau <[email protected]>" not changed
gpg: Total number processed: 1
gpg: unchanged: 1
The official website of git-secret is already doing a great job of
introducing the tool. In short, it allows us to declare certain files as “secrets” and encrypt
them via gpg – using the keys of all trusted parties. The encrypted file can then be stored
safely directly in the git repository and decrypted if required.
In this tutorial I’m using git-secret v0.4.0
$ git secret --version
0.4.0
The installation instructions for Alpine read as
follows:
sh -c "echo 'https://gitsecret.jfrog.io/artifactory/git-secret-apk/all/main'" >> /etc/apk/repositories
wget -O /etc/apk/keys/git-secret-apk.rsa.pub 'https://gitsecret.jfrog.io/artifactory/api/security/keypair/public/repositories/git-secret-apk'
apk add --update --no-cache git-secret
We update the .docker/images/php/base/Dockerfile accordingly:
# File: .docker/images/php/base/Dockerfile
# install git-secret
# @see https://git-secret.io/installation#alpine
ADD https://gitsecret.jfrog.io/artifactory/api/security/keypair/public/repositories/git-secret-apk /etc/apk/keys/git-secret-apk.rsa.pub
RUN echo "https://gitsecret.jfrog.io/artifactory/git-secret-apk/all/main" >> /etc/apk/repositories && \
apk add --update --no-cache \
bash \
git-secret \
gnupg \
make \
#...
git-secret is initialized via the following command run in the root of the git repository
git secret init
$ git secret init
git-secret: init created: '/var/www/app/.gitsecret/'
We only need to do this once, because we’ll commit the folder to git later. It contains the
following files:
$ git status | grep ".gitsecret"
new file: .gitsecret/keys/pubring.kbx
new file: .gitsecret/keys/pubring.kbx~
new file: .gitsecret/keys/trustdb.gpg
new file: .gitsecret/paths/mapping.cfg
The pubring.kbx~ file (with the trailing tilde ~) is only a temporary file and can safely be
git-ignored. See also
Can’t find any docs about keyring.kbx~ file.
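If you want to ignore it, a one-liner like the following should do (assuming it is run in the repository root):
echo ".gitsecret/keys/pubring.kbx~" >> .gitignore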
The git-secret directory and the gpg-agent socket
To use git-secret in a directory that is shared between the host system and docker, we need to
also run the following commands:
tee .gitsecret/keys/S.gpg-agent <<EOF
%Assuan%
socket=/tmp/S.gpg-agent
EOF
tee .gitsecret/keys/S.gpg-agent.ssh <<EOF
%Assuan%
socket=/tmp/S.gpg-agent.ssh
EOF
tee .gitsecret/keys/gpg-agent.conf <<EOF
extra-socket /tmp/S.gpg-agent.extra
browser-socket /tmp/S.gpg-agent.browser
EOF
This is necessary because there is an issue when git-secret is used in a setup where the
codebase is shared between the host system and a docker container.
I’ve explained the details in the Github issue
“gpg: can’t connect to the agent: IPC connect call failed” error in docker alpine on shared volume.
In short:
- gpg uses a gpg-agent to perform its tasks, and the two tools communicate through sockets
- those sockets are created in the --home-directory of the gpg-agent
- the gpg command used by git-secret uses the .gitsecret/keys directory as its --home-directory
- because that --home-directory is shared with the host system, the socket creation fails
The corresponding error messages are
gpg: can't connect to the agent: IPC connect call failed
gpg-agent: error binding socket to '/var/www/app/.gitsecret/keys/S.gpg-agent': I/O error
The workaround for this problem can be found in
this thread: Configure gpg to use different
locations for the sockets by
placing additional gpg configuration files in the .gitsecret/keys directory:
S.gpg-agent
%Assuan%
socket=/tmp/S.gpg-agent
S.gpg-agent.ssh
%Assuan%
socket=/tmp/S.gpg-agent.ssh
gpg-agent.conf
extra-socket /tmp/S.gpg-agent.extra
browser-socket /tmp/S.gpg-agent.browser
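After placing these files, a quick sanity check from within the container could look like this (my own verification steps, not part of the official docs):
# any operation that triggers the gpg-agent should now succeed ...
git secret reveal -f
# ... and the sockets should live in /tmp instead of .gitsecret/keys
ls -la /tmp/S.gpg-agent*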
To add a new user, you must first import its public gpg key. Then
run:
email="[email protected]"
git secret tell "$email"
In this case, the user [email protected] will now be able to decrypt the secrets.
To show the users run
git secret whoknows
$ git secret whoknows
[email protected]
To remove a user, run
email="[email protected]"
git secret killperson "$email"
FYI: This command was renamed to removeperson in git-secret >= 0.5.0
$ git secret killperson [email protected]
git-secret: removed keys.
git-secret: now [[email protected]] do not have an access to the repository.
git-secret: make sure to hide the existing secrets again.
User [email protected] will no longer be able to decrypt the secrets.
Caution: The secrets need to be re-encrypted after removing a user!
Please be aware that not only your secrets are stored in git, but who had access as well. I.e.
even if you remove a user and re-encrypt the secrets, that user would still be able to decrypt
the secrets of a previous commit (when the user was still added). In consequence, you need
to rotate the encrypted secrets themselves as well after removing a user.
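A complete offboarding could thus look like the following sketch (the email address, the secret file and the concrete rotation step are assumptions for illustration):
# remove the user from git-secret ('removeperson' in git-secret >= 0.5.0)
git secret killperson "[email protected]"
# rotate the actual secret values, e.g. generate a new database password
# (and update the database itself accordingly)
sed -i "s/^DB_PASSWORD=.*/DB_PASSWORD=$(openssl rand -hex 16)/" .env
# re-encrypt for the remaining users and commit the result
git secret hide
git add .env.secret .gitsecret/
git commit -m "Rotate secrets after removing a user"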
But isn’t that a great flaw in the system, making it a bad idea to use git-secret in general?
In my opinion: No.
If the removed user had access to the secrets at any point in time (no
matter where they have been stored), he could very well have just created a local copy or simply
“written them down”. In terms of security there is really no “added downside” due to git-secret.
It just makes it very clear that you must rotate the secrets ¯\_(ツ)_/¯
See also this
lengthy discussion on git-secret on Hacker News.
Run git secret add [filenames...] for files you want to encrypt. Example:
git secret add .env
If .env is not added in .gitignore, git-secret will display a warning and add it
automatically.
git-secret: these files are not in .gitignore: .env
git-secret: auto adding them to .gitignore
git-secret: 1 item(s) added.
Otherwise, the file is added with no warning.
$ git secret add .env
git-secret: 1 item(s) added.
You only need to add files once. They are then stored in .gitsecret/paths/mapping.cfg:
$ cat .gitsecret/paths/mapping.cfg
.env:505070fc20233cb426eac6a3414399d0f466710c993198b1088e897fdfbbb2d5
You can also show the added files via
git secret list
$ git secret list
.env
Caution: The files are not yet encrypted!
If you want to remove a file from being encrypted, run
git secret remove .env
Output
$ git secret remove .env
git-secret: removed from index.
git-secret: ensure that files: [.env] are now not ignored.
To actually encrypt the files, run:
git secret hide
Output:
$ git secret hide
git-secret: done. 1 of 1 files are hidden.
The encrypted (binary) file is stored at $filename.secret, i.e. .env.secret in this case:
$ cat .env.secret
�☺♀♥�H~�B�Ӯ☺�"��▼♂F�►���l�Cs��S�@MHWs��e������{♣♫↕↓�L� ↕s�1�J$◄♥�;���dž֕�Za�����\u�ٲ& ¶��V�► ���6��
;<�d:��}ҨD%.�;��&��G����vWW�]>���߶��▲;D�+Rs�S→�Y!&J��۪8���ٔF��→f����*��$♠���&RC�8▼♂�☻z h��Z0M�T>
The encrypted files are decryptable for all users that have been added via git secret tell.
That also means that you need to run this command again whenever a new user is added.
You can decrypt files via
git secret reveal
Output:
$ git secret reveal
File '/var/www/app/.env' exists. Overwrite? (y/N) y
git-secret: done. 1 of 1 files are revealed.
- Use the -f option to force the overwrite and run non-interactively.
- You can print the decrypted content of an individual file via git secret cat $filename (e.g. git secret cat .env).
In case the secret gpg key is password protected, you must pass the password
via the -p option. E.g. for password 123456
git secret reveal -p 123456
One problem that comes with encrypted files: You can’t review them during a code review in a
remote tool. So in order to understand what changes have been made, it is helpful to
show the changes between the encrypted and the decrypted files. This can be done via
git secret changes
Output:
$ echo "foo" >> .env
$ git secret changes
git-secret: changes in /var/www/app/.env:
--- /dev/fd/63
+++ /var/www/app/.env
@@ -34,3 +34,4 @@
MAIL_ENCRYPTION=null
MAIL_FROM_ADDRESS=null
MAIL_FROM_NAME="${APP_NAME}"
+foo
Note the +foo at the bottom of the output. It was added in the first line of the example via
echo "foo" >> .env.
Since I won’t be able to remember all the commands for git-secret and gpg, I’ve added them to
the Makefile at .make/01-00-application-setup.mk:
# File: .make/01-00-application-setup.mk
#...
# gpg
DEFAULT_SECRET_GPG_KEY?=secret.gpg
DEFAULT_PUBLIC_GPG_KEYS?=.dev/gpg-keys/*
.PHONY: gpg
gpg: ## Run gpg commands. Specify the command e.g. via ARGS="--list-keys"
$(EXECUTE_IN_APPLICATION_CONTAINER) gpg $(ARGS)
.PHONY: gpg-export-public-key
gpg-export-public-key: ## Export a gpg public key e.g. via EMAIL="[email protected]" PATH=".dev/gpg-keys/john-public.gpg"
@$(if $(PATH),,$(error PATH is undefined))
@$(if $(EMAIL),,$(error EMAIL is undefined))
"$(MAKE)" -s gpg ARGS="gpg --armor --export $(EMAIL) > $(PATH)"
.PHONY: gpg-export-private-key
gpg-export-private-key: ## Export a gpg private key e.g. via EMAIL="[email protected]" PATH="secret.gpg"
@$(if $(PATH),,$(error PATH is undefined))
@$(if $(EMAIL),,$(error EMAIL is undefined))
"$(MAKE)" -s gpg ARGS="--output $(PATH) --armor --export-secret-key $(EMAIL)"
.PHONY: gpg-import
gpg-import: ## Import a gpg key file e.g. via GPG_KEY_FILES="/path/to/file /path/to/file2"
@$(if $(GPG_KEY_FILES),,$(error GPG_KEY_FILES is undefined))
"$(MAKE)" -s gpg ARGS="--import --batch --yes --pinentry-mode loopback $(GPG_KEY_FILES)"
.PHONY: gpg-import-default-secret-key
gpg-import-default-secret-key: ## Import the default secret key
"$(MAKE)" -s gpg-import GPG_KEY_FILES="$(DEFAULT_SECRET_GPG_KEY)"
.PHONY: gpg-import-default-public-keys
gpg-import-default-public-keys: ## Import the default public keys
"$(MAKE)" -s gpg-import GPG_KEY_FILES="$(DEFAULT_PUBLIC_GPG_KEYS)"
.PHONY: gpg-init
gpg-init: gpg-import-default-secret-key gpg-import-default-public-keys ## Initialize gpg in the container, i.e. import all public and private keys
# git-secret
.PHONY: git-secret
git-secret: ## Run git-secret commands. Specify the command e.g. via ARGS="hide"
$(EXECUTE_IN_APPLICATION_CONTAINER) git-secret $(ARGS)
.PHONY: secret-init
secret-init: ## Initialize git-secret in the repository via `git-secret init`
"$(MAKE)" -s git-secret ARGS="init"
.PHONY: secret-init-gpg-socket-config
secret-init-gpg-socket-config: ## Initialize the config files to change the gpg socket locations
echo "%Assuan%" > .gitsecret/keys/S.gpg-agent
echo "socket=/tmp/S.gpg-agent" >> .gitsecret/keys/S.gpg-agent
echo "%Assuan%" > .gitsecret/keys/S.gpg-agent.ssh
echo "socket=/tmp/S.gpg-agent.ssh" >> .gitsecret/keys/S.gpg-agent.ssh
echo "extra-socket /tmp/S.gpg-agent.extra" > .gitsecret/keys/gpg-agent.conf
echo "browser-socket /tmp/S.gpg-agent.browser" >> .gitsecret/keys/gpg-agent.conf
.PHONY: secret-encrypt
secret-encrypt: ## Encrypt secret files via `git-secret hide`
"$(MAKE)" -s git-secret ARGS="hide"
.PHONY: secret-decrypt
secret-decrypt: ## Decrypt secret files via `git-secret reveal -f`
"$(MAKE)" -s git-secret ARGS="reveal -f"
.PHONY: secret-decrypt-with-password
secret-decrypt-with-password: ## Decrypt secret files using a password for gpg via `git-secret reveal -f -p $(GPG_PASSWORD)`
@$(if $(GPG_PASSWORD),,$(error GPG_PASSWORD is undefined))
"$(MAKE)" -s git-secret ARGS="reveal -f -p $(GPG_PASSWORD)"
.PHONY: secret-add
secret-add: ## Add a file to git secret via `git-secret add $FILE`
@$(if $(FILE),,$(error FILE is undefined))
"$(MAKE)" -s git-secret ARGS="add $(FILE)"
.PHONY: secret-cat
secret-cat: ## Show the contents of a file added to git secret via `git-secret cat $FILE`
@$(if $(FILE),,$(error FILE is undefined))
"$(MAKE)" -s git-secret ARGS="cat $(FILE)"
.PHONY: secret-list
secret-list: ## List all files added to git secret via `git-secret list`
"$(MAKE)" -s git-secret ARGS="list"
.PHONY: secret-remove
secret-remove: ## Remove a file from git secret via `git-secret remove $FILE`
@$(if $(FILE),,$(error FILE is undefined))
"$(MAKE)" -s git-secret ARGS="remove $(FILE)"
.PHONY: secret-add-user
secret-add-user: ## Add a user to git secret via `git-secret tell $EMAIL`
@$(if $(EMAIL),,$(error EMAIL is undefined))
"$(MAKE)" -s git-secret ARGS="tell $(EMAIL)"
.PHONY: secret-show-users
secret-show-users: ## Show all users that have access to git secret via `git-secret whoknows`
"$(MAKE)" -s git-secret ARGS="whoknows"
.PHONY: secret-remove-user
secret-remove-user: ## Remove a user from git secret via `git-secret killperson $EMAIL`
@$(if $(EMAIL),,$(error EMAIL is undefined))
"$(MAKE)" -s git-secret ARGS="killperson $(EMAIL)"
.PHONY: secret-diff
secret-diff: ## Show the diff between the content of encrypted and decrypted files via `git-secret changes`
"$(MAKE)" -s git-secret ARGS="changes"
Working with git-secret is pretty straightforward:
- add the files to be encrypted via git-secret
- keep the decrypted files out of the repository via .gitignore
But: The devil is in the details. The Process challenges section explains
some of the pitfalls that we have encountered and the Scenarios section gives some
concrete examples for common scenarios.
From a process perspective we’ve encountered some challenges that I’d like to mention – including
how we deal with them.
When updating secrets you must ensure to always decrypt the files first in order to avoid
using “stale” files that you might still have locally. I usually check out the latest main
branch and run git secret reveal to have the most up-to-date versions of the secret files. You
could also use a post-merge git hook to do
this automatically, but I personally don’t want to risk overwriting my local secret files by
accident.
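For completeness, such a post-merge hook would be small (a sketch – only use it if automatically overwriting your local secret files is acceptable to you):
#!/bin/sh
# File: .git/hooks/post-merge (make it executable via: chmod +x .git/hooks/post-merge)
git secret reveal -f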
Since the encrypted files cannot be diffed meaningfully, the code reviews become more difficult
when secrets are involved. We use Gitlab for reviews and I usually first check the diff of
the .gitsecret/paths/mapping.cfg file to see “which files have changed” directly in the UI.
In addition, I will
- check out the main branch and decrypt the secrets via git secret reveal -f
- switch to the feature-branch
- run git secret changes to see the differences between the decrypted files from main and the feature-branch
Things get even more complicated when multiple team members need to modify secret files at the same
time on different branches, as the encrypted files cannot be compared – i.e. git cannot be smart
about delta updates.
The only way around this is coordinating the pull requests, i.e. merge the first, update the
secrets of the second and then merge the second.
Fortunately, this has only happened very rarely so far.
The git-secret and gpg setup
Currently, all developers in our team have git-secret installed locally (instead of using it
through docker) and use their own gpg keys.
This means more onboarding overhead, because each new developer has to
- install git-secret locally (*)
- install gpg locally (*)
- create their own gpg key pair and share the public key
- be added via git secret tell
And for offboarding, the leaving developer has to be removed via git secret killperson.
Plus, we need to ensure that the git-secret and gpg versions are kept up-to-date for everyone to
not run into any compatibility issues.
As an alternative, I’m currently leaning more towards handling everything through docker (as
presented in this tutorial). All steps marked with (*) are then obsolete, i.e. there is no need
to setup git-secret and gpg locally.
But the approach also comes with some downsides, because
- each developer has to place their private gpg key “in the codebase” (ignored via .gitignore) so it can be imported by gpg (in docker). The alternative would be using a shared key pair for the whole team.
To make this a little more convenient, we put the public gpg keys of every dev in the
repository under .dev/gpg-keys/ and the private key has to be named secret.gpg and put
in the root of the codebase.
In this setup, secret.gpg must also be added to the .gitignore file.
# File: .gitignore
#...
vendor/
secret.gpg
The import can now be simplified with make targets:
# gpg
DEFAULT_SECRET_GPG_KEY?=secret.gpg
DEFAULT_PUBLIC_GPG_KEYS?=.dev/gpg-keys/*
.PHONY: gpg
gpg: ## Run gpg commands. Specify the command e.g. via ARGS="--list-keys"
$(EXECUTE_IN_APPLICATION_CONTAINER) gpg $(ARGS)
.PHONY: gpg-import
gpg-import: ## Import a gpg key file e.g. via GPG_KEY_FILES="/path/to/file /path/to/file2"
@$(if $(GPG_KEY_FILES),,$(error GPG_KEY_FILES is undefined))
"$(MAKE)" -s gpg ARGS="--import --batch --yes --pinentry-mode loopback $(GPG_KEY_FILES)"
.PHONY: gpg-import-default-secret-key
gpg-import-default-secret-key: ## Import the default secret key
"$(MAKE)" -s gpg-import GPG_KEY_FILES="$(DEFAULT_SECRET_GPG_KEY)"
.PHONY: gpg-import-default-public-keys
gpg-import-default-public-keys: ## Import the default public keys
"$(MAKE)" -s gpg-import GPG_KEY_FILES="$(DEFAULT_PUBLIC_GPG_KEYS)"
.PHONY: gpg-init
gpg-init: gpg-import-default-secret-key gpg-import-default-public-keys ## Initialize gpg in the container, i.e. import all public and private keys
“Everything” can now be handled via
make gpg-init
that needs to be run one single time after a container has been started.
The scenarios assume the following preconditions:
- the checked-out branch part-6-git-secret-encrypt-repository-docker
git checkout part-6-git-secret-encrypt-repository-docker
- no running docker containers
make docker-down
- a removed .gitsecret folder, removed keys in .dev/gpg-keys, and no secret.gpg key or passwords.* files
rm -rf .gitsecret/ .dev/gpg-keys/* secret.gpg passwords.*
Create and export gpg keys
Unfortunately, I didn’t find a way to create and export gpg keys through make and docker. You
need to either run the commands interactively OR pass a string with newlines to it. Both things are
horribly complicated with make and docker. Thus, you need to log into the application
container and run the commands in there directly. Not great – but this needs to be done only
once when a new developer is onboarded anyway.
FYI: I usually log into containers via
Easy container access via d in .bashrc helper.
The secret key is exported to secret.gpg and the public key to .dev/gpg-keys/alice-public.gpg.
# start the docker setup
make docker-up
# log into the container ('winpty' is only required on Windows)
winpty docker exec -ti dofroscra_local-application-1 bash
# create the key pair
name="Alice Doe"
email="[email protected]"
gpg --batch --gen-key <<EOF
Key-Type: 1
Key-Length: 2048
Subkey-Type: 1
Subkey-Length: 2048
Name-Real: $name
Name-Email: $email
Expire-Date: 0
%no-protection
EOF
# export the private key
gpg --output secret.gpg --armor --export-secret-key $email
# export the public key
gpg --armor --export $email > .dev/gpg-keys/alice-public.gpg
$ make docker-up
ENV=local TAG=latest DOCKER_REGISTRY=docker.io DOCKER_NAMESPACE=dofroscra APP_USER_NAME=application APP_GROUP_NAME=application docker compose -p dofroscra_local --env-file ./.docker/.env -f ./.docker/docker-compose/docker-compose.yml -f ./.docker/docker-compose/docker-compose.local.yml up -d
Container dofroscra_local-application-1 Created
...
Container dofroscra_local-application-1 Started
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
...
95f740607586 dofroscra/application-local:latest "/usr/sbin/sshd -D" 21 minutes ago Up 21 minutes 0.0.0.0:2222->22/tcp dofroscra_local-application-1
$ winpty docker exec -ti dofroscra_local-application-1 bash
root:/var/www/app# name="Alice Doe"
root:/var/www/app# email="[email protected]"
root:/var/www/app# gpg --batch --gen-key <<EOF
> Key-Type: 1
> Key-Length: 2048
> Subkey-Type: 1
> Subkey-Length: 2048
> Name-Real: $name
> Name-Email: $email
> Expire-Date: 0
> %no-protection
> EOF
gpg: directory '/root/.gnupg' created
gpg: keybox '/root/.gnupg/pubring.kbx' created
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key BBBE654440E720C1 marked as ultimately trusted
gpg: directory '/root/.gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/root/.gnupg/openpgp-revocs.d/225C736E0E70AC222C072B70BBBE654440E720C1.rev'
root:/var/www/app# gpg --output secret.gpg --armor --export-secret-key $email
root:/var/www/app# head secret.gpg
-----BEGIN PGP PRIVATE KEY BLOCK-----
lQOYBGJD+bwBCADBGKySV5PINc5MmQB3PNvCG7Oa1VMBO8XJdivIOSw7ykv55PRP
3g3R+ERd1Ss5gd5KAxLc1tt6PHGSPTypUJjCng2plwD8Jy5A/cC6o2x8yubOslLa
x1EC9fpcxUYUNXZavtEr+ylOaTaRz6qwSabsAgkg2NZ0ey/QKmFOZvhL8NlK9lTI
GgZPTiqPCsr7hiNg0WRbT5h8nTmfpl/DdTgwfPsDn5Hn0TEMa79WsrPnnq16jsq0
Uusuw3tOmdSdYnT8j7m1cpgcSj0hRF1eh4GVE0o62GqeLTWW9mfpcuv7n6mWaCB8
DCH6H238gwUriq/aboegcuBktlvSY21q/MIXABEBAAEAB/wK/M2buX+vavRgDRgR
hjUrsJTXO3VGLYcIetYXRhLmHLxBriKtcBa8OxLKKL5AFEuNourOBdcmTPiEwuxH
5s39IQOTrK6B1UmUqXvFLasXghorv8o8KGRL4ABM4Bgn6o+KBAVLVIwvVIhQ4rlf
root:/var/www/app# gpg --armor --export $email > .dev/gpg-keys/alice-public.gpg
root:/var/www/app# head .dev/gpg-keys/alice-public.gpg
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBGJD+bwBCADBGKySV5PINc5MmQB3PNvCG7Oa1VMBO8XJdivIOSw7ykv55PRP
3g3R+ERd1Ss5gd5KAxLc1tt6PHGSPTypUJjCng2plwD8Jy5A/cC6o2x8yubOslLa
x1EC9fpcxUYUNXZavtEr+ylOaTaRz6qwSabsAgkg2NZ0ey/QKmFOZvhL8NlK9lTI
GgZPTiqPCsr7hiNg0WRbT5h8nTmfpl/DdTgwfPsDn5Hn0TEMa79WsrPnnq16jsq0
Uusuw3tOmdSdYnT8j7m1cpgcSj0hRF1eh4GVE0o62GqeLTWW9mfpcuv7n6mWaCB8
DCH6H238gwUriq/aboegcuBktlvSY21q/MIXABEBAAG0HUFsaWNlIERvZSA8YWxp
Y2VAZXhhbXBsZS5jb20+iQFOBBMBCgA4FiEEIlxzbg5wrCIsBytwu75lREDnIMEF
AmJD+bwCGy8FCwkIBwIGFQoJCAsCBBYCAwECHgECF4AACgkQu75lREDnIMEN4Af+
That’s it. We now have a new gpg key pair for [email protected] and have exported the private key to
secret.gpg and the public key to .dev/gpg-keys/alice-public.gpg (and thus shared them with the host system).
The remaining commands can now be run outside of the application container directly on the
host system.
Introduce git-secret
Let’s say we want to introduce git-secret “from scratch” to a new codebase. Then you would run
the following commands:
Initialize git-secret
make secret-init
$ make secret-init
"C:/Program Files/Git/mingw64/bin/make" -s git-secret ARGS="init";
git-secret: init created: '/var/www/app/.gitsecret/'
Apply the gpg fix for shared directories
See The git-secret directory and the gpg-agent socket.
make secret-init-gpg-socket-config
$ make secret-init-gpg-socket-config
echo "%Assuan%" > .gitsecret/keys/S.gpg-agent
echo "socket=/tmp/S.gpg-agent" >> .gitsecret/keys/S.gpg-agent
echo "%Assuan%" > .gitsecret/keys/S.gpg-agent.ssh
echo "socket=/tmp/S.gpg-agent.ssh" >> .gitsecret/keys/S.gpg-agent.ssh
echo "extra-socket /tmp/S.gpg-agent.extra" > .gitsecret/keys/gpg-agent.conf
echo "browser-socket /tmp/S.gpg-agent.browser" >> .gitsecret/keys/gpg-agent.conf
Initialize gpg after container startup
After restarting the containers, we need to initialize gpg, i.e. import all public keys from
.dev/gpg-keys/* and the private key from secret.gpg. Otherwise we will not be able to encrypt
and decrypt the files.
make gpg-init
$ make gpg-init
"C:/Program Files/Git/mingw64/bin/make" -s gpg-import GPG_KEY_FILES="secret.gpg"
gpg: directory '/home/application/.gnupg' created
gpg: keybox '/home/application/.gnupg/pubring.kbx' created
gpg: /home/application/.gnupg/trustdb.gpg: trustdb created
gpg: key BBBE654440E720C1: public key "Alice Doe <[email protected]>" imported
gpg: key BBBE654440E720C1: secret key imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: secret keys read: 1
gpg: secret keys imported: 1
"C:/Program Files/Git/mingw64/bin/make" -s gpg-import GPG_KEY_FILES=".dev/gpg-keys/*"
gpg: key BBBE654440E720C1: "Alice Doe <[email protected]>" not changed
gpg: Total number processed: 1
gpg: unchanged: 1
Let’s start by adding our own user to git-secret
make secret-add-user EMAIL="[email protected]"
$ make secret-add-user EMAIL="[email protected]"
"C:/Program Files/Git/mingw64/bin/make" -s git-secret ARGS="tell [email protected]"
git-secret: done. [email protected] added as user(s) who know the secret.
And verify that it worked via
make secret-show-users
$ make secret-show-users
"C:/Program Files/Git/mingw64/bin/make" -s git-secret ARGS="whoknows"
[email protected]
Let’s add a new encrypted file secret_password.txt.
Create the file
echo "my_new_secret_password" > secret_password.txt
Add it to .gitignore
echo "secret_password.txt" >> .gitignore
Add it to git-secret
make secret-add FILE="secret_password.txt"
$ make secret-add FILE="secret_password.txt"
"C:/Program Files/Git/mingw64/bin/make" -s git-secret ARGS="add secret_password.txt"
git-secret: 1 item(s) added.
Encrypt all files
make secret-encrypt
$ make secret-encrypt
"C:/Program Files/Git/mingw64/bin/make" -s git-secret ARGS="hide"
git-secret: done. 1 of 1 files are hidden.
$ ls secret_password.txt.secret
secret_password.txt.secret
Let’s first remove the “plain” secret_password.txt file
rm secret_password.txt
$ rm secret_password.txt
$ ls secret_password.txt
ls: cannot access 'secret_password.txt': No such file or directory
and then decrypt the encrypted one.
make secret-decrypt
$ make secret-decrypt
"C:/Program Files/Git/mingw64/bin/make" -s git-secret ARGS="reveal -f"
git-secret: done. 1 of 1 files are revealed.
$ cat secret_password.txt
my_new_secret_password
Caution: If the secret gpg key is password protected (e.g. 123456), run
make secret-decrypt-with-password GPG_PASSWORD=123456
You could also add the GPG_PASSWORD variable to the
.make/.env
file as a local default value so that you wouldn’t have to specify the value every time and
could then simply run
make secret-decrypt-with-password
without passing GPG_PASSWORD
Remove the secret_password.txt file we added previously:
make secret-remove FILE="secret_password.txt"
$ make secret-remove FILE="secret_password.txt"
"C:/Program Files/Git/mingw64/bin/make" -s git-secret ARGS="remove secret_password.txt"
git-secret: removed from index.
git-secret: ensure that files: [secret_password.txt] are now not ignored.
Caution: this will neither remove the secret_password.txt file nor
the secret_password.txt.secret file automatically.
$ ls -l | grep secret_password.txt
-rw-r--r-- 1 Pascal 197121 19 Mar 31 14:03 secret_password.txt
-rw-r--r-- 1 Pascal 197121 358 Mar 31 14:02 secret_password.txt.secret
But even though the encrypted secret_password.txt.secret file still exists, it will not be
decrypted:
$ make secret-decrypt
"C:/Program Files/Git/mingw64/bin/make" -s git-secret ARGS="reveal -f"
git-secret: done. 0 of 0 files are revealed.
Removing a team member can be done via
make secret-remove-user EMAIL="[email protected]"
$ make secret-remove-user EMAIL="[email protected]"
"C:/Program Files/Git/mingw64/bin/make" -s git-secret ARGS="killperson [email protected]"
git-secret: removed keys.
git-secret: now [[email protected]] do not have an access to the repository.
git-secret: make sure to hide the existing secrets again.
If there are any users left, we must make sure to re-encrypt the secrets via
make secret-encrypt
Otherwise (if no more users are left) git-secret would simply error out
$ make secret-decrypt
"C:/Program Files/Git/mingw64/bin/make" -s git-secret ARGS="reveal -f"
git-secret: abort: no public keys for users found. run 'git secret tell [email protected]'.
make[1]: *** [.make/01-00-application-setup.mk:57: git-secret] Error 1
make: *** [.make/01-00-application-setup.mk:69: secret-decrypt] Error 2
Caution: Please keep in mind to
rotate the secrets themselves as well!
Pros and cons
- git pull is the only thing you need to get everything (=> good dev experience)
- the private key has to be provided manually at ./secret.gpg
Congratulations, you made it! If some things are not completely clear by now, don’t hesitate to
leave a comment. You are now able to encrypt and decrypt secret files so that they can be stored
directly in the git repository.
In the next part of this tutorial, we will
set up a CI pipeline for dockerized PHP Apps on Github and Gitlab
that decrypts all necessary secrets and then runs our tests and qa tools.
Please subscribe to the RSS feed or via email to get automatic
notifications when this next part comes out 🙂
Since you ended up on this blog, chances are pretty high that you’re into Software Development
(probably PHP, Laravel, Docker or Google Big Query) and I’m a big fan of feedback and networking.
So – if you’d like to stay in touch, feel free to shoot me an email with a couple of words about yourself and/or
connect with me on
LinkedIn or
Twitter
or simply subscribe to my RSS feed
or go the crazy route and subscribe via mail
and don’t forget to leave a comment 🙂
Laravel News Links
https://i.ytimg.com/vi/ZRdgVuIppYQ/maxresdefault.jpg
In this lesson, we go over what an active record pattern is & how Laravel implements it in its ORM package called Eloquent. This lesson also covers the basics of Eloquent to get you familiar with it & show you the differences between the data mapper & active record patterns.
Laravel News Links
https://kbouzidi.com/img/containers/assets/laravel-saas-app-permission-cashier-subscription.jpeg/ffa5ee46ec4fd568502fee9dc65bab3d.jpeg
Most SaaS applications have plans that users can subscribe to, such as “Standard Plan” and “Premium Plan”, and those plans can be billed on a yearly or monthly basis. The idea is that when a user subscribes to a plan, we give them permission to access our restricted content or service so they can use it.
Let’s say we have two plans in our application “Standard Plan” and “Premium Plan” then we will make two roles, one for standard customers and another for premium customers.
When a user buys a subscription, we give them that role so they can access the features associated with it.
composer require laravel/breeze --dev
php artisan breeze:install
npm install && npm run dev
php artisan migrate
composer require spatie/laravel-permission
Let’s add these to our $routeMiddleware array inside app/Http/Kernel.php
protected $routeMiddleware = [
// ...
'role' => \Spatie\Permission\Middlewares\RoleMiddleware::class,
'permission' => \Spatie\Permission\Middlewares\PermissionMiddleware::class,
'role_or_permission' => \Spatie\Permission\Middlewares\RoleOrPermissionMiddleware::class,
];
use Spatie\Permission\Traits\HasRoles;
class User extends Authenticatable
{
use HasRoles;
// ...
}
For more details check laravel-permission docs.
Users have roles and roles have permissions so first, let’s start with creating permissions.
There are two features “tasks” and “events”.
php artisan make:seeder PermissionSeeder
$permissions = [
'list tasks',
'edit tasks',
'create tasks',
'delete tasks',
'list events',
'edit events',
'create events',
'delete events',
];
foreach ($permissions as $permission) {
Permission::create(['name' => $permission]);
}
There are two roles: standard-user and premium-user.
php artisan make:seeder UserSeeder
// create "standard-user" Role
$standardUserRole = Role::create(['name' => 'standard-user']);
$standardPlanPermissions = array([
'list tasks',
'edit tasks',
'create tasks',
'delete tasks',
]);
// assign permissions to "standard-user" role
$standardUserRole->syncPermissions($standardPlanPermissions);
// create standard user
$standardPlanUser = User::create([
'name' => 'Standard Plan User', 'email' => 'standardplan@kbouzidi.com',
'password' => bcrypt('123456')
]);
// assign "standard-user" to the standard user
$standardPlanUser->assignRole([$standardUserRole->id]);
$premiumUserRole = Role::create(['name' => 'premium-user']);
// premium-user has more features
$premiumPlanPermissions = array([
...$standardPlanPermissions,
'list events',
'edit events',
'create events',
'delete events',
]);
$premiumUserRole->syncPermissions($premiumPlanPermissions);
$premiumPlanUser = User::create([
'name' => 'Premium Plan User', 'email' => 'premiumplan@kbouzidi.com',
'password' => bcrypt('123456')
]);
$premiumPlanUser->assignRole([$premiumUserRole->id]);
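With both seeders in place, they can be run via artisan (using the seeder class names from above):
php artisan db:seed --class=PermissionSeeder
php artisan db:seed --class=UserSeeder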
Route::get('/dashboard', function() {
return view('dashboard', ['intent' => auth()->user()->createSetupIntent()]);
})->middleware(['auth', 'isSubscribed'])->name('dashboard');
Route::post('/subscribe', [SubscriptionController::class, 'subscribe'])
->middleware(['auth'])
->name('subscribe');
Route::name('subscribed.')
->middleware(['auth', 'role:standard-user|premium-user'])
->group(function() {
Route::view('subscribed/dashboard', 'subscribed.dashboard')
->name('dashboard');
});
This is a custom middleware to check whether a user is subscribed and, if so, redirect them to their section.
php artisan make:middleware RedirectIfSubscribed
Inside the handle method we will add this code :
if ($request->user() &&
($request->user()->subscribed('standard') ||
$request->user()->subscribed('premium'))) {
return to_route('subscribed.dashboard');
}
Register the middleware in app/Http/Kernel.php like so:
protected $routeMiddleware = [
//
'isSubscribed' => RedirectIfSubscribed::class,
];
In AuthenticatedSessionController, add the following to the store method:
$request->authenticate();
$request->session()->regenerate();
// add this
if ($request->user()->hasRole('standard-user') ||
$request->user()->hasRole('premium-user')) {
return redirect()->intended(route('subscribed.dashboard'));
}
return redirect()->intended(RouteServiceProvider::HOME);
composer require laravel/cashier
php artisan migrate
use Spatie\Permission\Traits\HasRoles;
use Laravel\Cashier\Billable;
class User extends Authenticatable
{
use Billable,HasRoles
}
STRIPE_KEY=your-stripe-key
STRIPE_SECRET=your-stripe-secret
First, create some products. You can do that from the Stripe dashboard; check this guide (link).
Make sure to choose recurring prices.
You should have two plans, premium and standard; each plan has two recurring prices, yearly and monthly.
Now we can set up a controller to accept user subscriptions.
php artisan make:controller SubscriptionController
// you can move this to a database table
private $plans = array(
'standard_monthly' => 'price_1KpyUHEpWs7pwp46NqoIW3dr',
'standard_annually' => 'price_1KpyUHEpWs7pwp46bvRJH9lM',
'premium_monthly' => 'price_1KpyYdEpWs7pwp46q31BU6vT',
'premium_annually' => 'price_1KpyYdEpWs7pwp46iGRz3829',
);
public function subscribe(Request $request) {
// this is a demo make sure to add some validation logic
$user = auth()->user();
$planeName =
in_array($request->planId, ['standard_monthly', 'standard_annually']) ?
'standard' :
'premium';
// check if the user has already subscribed to the plan
if ($user->subscribed($planeName)) {
return response()->json(
['message' => 'You have already subscribed to this plan!'], 403);
}
// get plan priceId
$planPriceId = $this->plans[$request->planId];
// It does what it says :p
$user->createOrGetStripeCustomer();
try {
// subscribe user to plan
$subscription = $user->newSubscription($planeName, $planPriceId)
->create($request->paymentMethodId);
if ($subscription->name == 'standard') {
$user->assignRole('standard-user');
} else {
$user->assignRole('premium-user');
}
return response()->json(
['message' => 'Subscription was successfully completed!'], 200);
} catch (IncompletePayment $exception) {
return response()->json(['message' => 'Oops! Something went wrong.'], 400);
}
}
I used this Tailwind CSS snippet template with a bit of AlpineJs magic 🪄 and we got this:
[screenshot: pricing plans page]
<script src="https://js.stripe.com/v3/"></script>
<script>
const stripe = Stripe('{{ env("STRIPE_KEY") }}');
const elements = stripe.elements();
const cardElement = elements.create('card');
const cardButton = document.getElementById('card-button');
const clientSecret = cardButton.dataset.secret;
const cardHolderName = document.getElementById('card-holder-name');
cardElement.mount('#card-element');
</script>
<input id="card-holder-name" class="..."
type="text" name="card_holder" placeholder="Card Holder" />
<div class="..." id="card-element"></div>
Route::get('/dashboard', function () {
return view('dashboard',[
'intent' => auth()->user()->createSetupIntent()
]);
})->middleware(['auth','isSubscribed'])->name('dashboard');
Here we are passing it as a button attribute “data-secret”
<x-button x-text="processing ? 'Processing...' : 'Subscribe'" @click="subscribe"
class="mt-4" id="card-button" data-secret="">
Subscribe
</x-button>
When the button is clicked, we will call the subscribe method, which uses the Stripe SDK to call the confirmCardSetup method with the clientSecret as an argument, so the card information can be verified without ever hitting our server 🔒.
Stripe will then return a setupIntent if the card is valid, and we will be able to access the payment_method id that we send to our back-end to charge the customer.
async subscribe() {
this.processing = true
const {setupIntent, error} = await stripe.confirmCardSetup(clientSecret, {
payment_method:
{card: cardElement, billing_details: {name: cardHolderName.value}}
});
if (error) {
this.errorMessage = error.message
return;
}
let response = axios.post('/subscribe', {
'paymentMethodId': setupIntent.payment_method,
'planId': this.selectedPlanId,
'_token': '{{ csrf_token() }}',
});
response.then(response => {
this.successMessage = response.data.message
location.reload()
})
response.catch(({response}) => {this.errorMessage = response.data.message})
response.finally(() => this.processing = false)
}
After the post request to the subscribe route, we will trigger location.reload() to redirect the user to the appropriate section with the help of the isSubscribed middleware.
We have two features, standard users can manage tasks and premium users can manage tasks and events.
php artisan make:model Task -crmf
php artisan make:model Event -crmf
ℹ️ : the flags generate a controller (c) with resource methods (r), a migration (m) and a model factory (f)
I used factories to seed data and made a simple API CRUD for tasks and events, nothing fancy; you can check the code in my GitHub repo.
Route::name('subscribed.')
->middleware(['auth', 'role:standard-user|premium-user'])
->group(function() {
Route::view('subscribed/dashboard', 'subscribed.dashboard')
->name('dashboard');
Route::resource('tasks', TaskController::class)->middleware([
'permission:list tasks|edit tasks|create tasks|delete tasks'
]);
Route::resource('events', EventController::class)->middleware([
'permission:list events|edit events|create events|delete events'
]);
});
We are protecting these features with permissions check using laravel-permission middleware.
We’ll just talk about how to list tasks and events, you can add more features.
@can('list tasks')
<div x-data ="{
tasks: [],
async init() {
this.tasks = await (await fetch('/tasks')).json()
}
}" class='basis-1/2''
<ul>
<template x-for='task in tasks' :key='task.id'>
<li x-text='task.name'></li>
</template>
</ul>
</div>
@endcan
@can('list events')
<div x-data ="{
events: [],
async init() {
this.events = await (await fetch('/events')).json()
}
}" class='basis-1/2''
<ul>
<template x-for='event in events' :key='event.id'>
<li x-text='event.name'></li>
</template>
</ul>
</div>
@endcan
Now you can check whether the user has a given permission. You can also use policies to get more control – like limiting standard users to creating a certain number of tasks (3 or 5 or whatever), you get the idea 😉.
Youpi 🎉🥳 you now have your own SaaS app.
Before you go to LinkedIn and start writing CEO / Mister Big Boss / Ninja …
make sure to listen to this podcast from Jeffrey Way first: 10 Business Tips When Launching Your First App.
The demo project will be on my GitHub Safemood.
I’m willing to make a demo project for every article so subscribe to my newsletter for more 🚀.
If you have a question or even a tip for me, you can find me on Twitter or LinkedIn.
Laravel News Links
https://www.howtoforge.com/images/featured/aws-mysql-replica.png
Amazon RDS is an easy-to-set-up AWS-managed database service. In this guide, we will see how to create a read replica of a MySQL RDS database instance.
Planet MySQL
https://www.hibit.dev/images/social/2022/preview/laravel_ddd.png
Modern web frameworks teach you to take one group of related concepts and split it across multiple places throughout your codebase. Laravel is a robust framework with a big community behind it. Usually its standard structure is enough for most starting projects.
Building scalable applications, instead, requires a different approach. Have you ever heard a client ask you to work on controllers or to review the models folder? Probably never – they ask you to work on invoicing, client management or users. These concept groups are called domains.
Let’s do a practical exercise applying Domain Driven Design. Our goal is to create a boilerplate to be used universally as the base of any Laravel project, taking advantage of the framework’s power while meeting complex business requirements.
Understanding of Domain Driven Design and some basic concepts:
We are going to use a fresh Laravel 9 installation for this guide; take a look at how to create your first Laravel project. To run Laravel locally, a PHP setup is also required.
We must keep in mind some important points planning the architecture of our software:
There are several ways in which the Laravel framework can be organized to serve as a template for large-scale projects. We will focus on the app (aka src) folder while keeping the framework features almost intact.
Initially, Laravel’s structure looks as below:
[screenshot: default Laravel folder structure]
With the modified codebase structure, we are able to follow Domain Driven Design within our Laravel project, which will support the future growth of our software. We will also be ready for upcoming framework upgrades – we want it to be easy to upgrade to the next versions.
First, we should create a folder for each DDD layer:
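Assuming the four layers described below (Domain, Application, Infrastructure, User Interface), the folders could be created like this (a sketch – the exact naming is up to you):
mkdir -p app/Domain app/Application app/Infrastructure app/UserInterface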
Since this layer is where abstractions are made, the design of interfaces is included in the domain layer. It will also contain aggregates, value objects (VOs), data transfer objects (DTOs), domain events, entities, models, etc…
The only exception would be anything related to Eloquent models. Eloquent makes it very easy to interact with databases, tables and rows, but the reality is that it’s not a DDD model: it’s an ambiguous definition of the concept of a model mixed with the implementation of the database connection. Does that mean we cannot use Eloquent? We can – it can be used as a repository implementation (infrastructure layer). This approach gives us a significant advantage: we are no longer dependent on Laravel’s method names and we can use naming that reflects the language of the domain.
For now we have nothing in the domain layer, so we will keep it empty.
Application layer provides the required base to use and manipulate the domain in a user-friendly way. It is where business process flows are handled, commands are executed and reactions to domain events are coded.
For now we have nothing in the application layer, so we will keep it empty.
Infrastructure layer is responsible for communication with external websites, access to services such as database (persistence), messaging systems and email services.
We are going to treat Laravel as a third-party service for our application. So all the framework files are going to be grouped inside the infrastructure folder.
What does it imply:
Note: make sure to update namespaces when moving files.
The final result looks as follows:
[screenshot: infrastructure layer folder structure]
The user interface layer is the part where interaction with the external world happens. It is responsible for displaying information to the user and accepting new data. It could be implemented for web, console or any presentation technology.
For now we have nothing in the user interface layer, so we will keep it empty.
One last thing our architecture is lacking: connecting concrete implementations to the interfaces within our domain, e.g. repository interfaces.
For each module on the domain layer, we need a matching module in the infrastructure layer which takes responsibility for what the domain layer cannot afford to care about.
We recommend using EventServiceProvider.php to make these bindings:
[screenshot: binding interfaces to implementations in EventServiceProvider.php]
Here you can bind the abstract interface to the concrete implementation. It acts as a kind of class-wiring configuration.
As a small bonus, we’ve included shared domain VOs for basic types.
[screenshot: shared value object base classes]
These classes provide an abstraction and shared methods for the final VO definition. An example of usage:
<?php
namespace App\Domain\Post\ValueObject;
use App\Domain\Shared\ValueObject\StringValueObject;
class Message extends StringValueObject
{
}
Note: constructor, getters and additional shared methods can be included in the parent StringValueObject.
Note that so far nothing has changed in the way we use Laravel. We still have our Kernels, Providers, Exception Handlers, Rules, Mails and more inside the app folder.
Implementing Domain-Driven Design is always going to be a challenge no matter what framework we use; there is no unique way of defining things. Almost everything depends on the specific project you’re working on, and it probably makes sense to apply a different structure or architecture in other cases.
Domain Driven Design is a continuous process that must be carried out according to specific needs and adapted over time. It’s also a trade-off: invest time in a perfect structure up front, or create a starting base and improve it over time.
Official GitHub: https://github.com/hibit-dev/laravel9-ddd
Laravel News Links
http://img.youtube.com/vi/gg8gjO5pLps/0.jpg
I have created a video explaining the Laravel ecosystem items on the website with good visualizations.
It was becoming so long, so I decided to publish the first part for now.
Laravel News Links
https://photos5.appleinsider.com/gallery/48125-93981-000-lead-Repair-Manuals-xl.jpg
Whether you’re planning to fix your iPhone screen, or you’re just curious to see what the new Self Service Repair program entails, you can now download Apple’s instructions to get all of the details.
Apple has launched its promised Self Service Repair program for iPhones, and if nothing else, it’s going to tell people just how involved repairing these devices is. In practice, it’s unlikely that many regular consumers will go through the process of repairing their devices.
But even if they don’t, it’s now possible for everyone to see what they’re paying for when they take an iPhone in to be fixed. It’s fascinating how detailed Apple’s instructions are, right down to when you cannot re-use a screw you’ve just taken out of an iPhone.
So whether it’s for actual, practical need because you’re going to do this, or it’s for a quite incredible look inside how finely engineered iPhones are, Apple has two new sets of documentation for you.
Both can be read online, but they are in PDF form so they can also be downloaded from the same link. In Safari, hover your cursor over the bottom middle of the page on screen, and controls including a download button appear.
Apple runs this new service, and it is promoting this ahead of any possible future legislation that requires manufacturers to provide a Right to Repair service. But it’s also distancing itself from the process.
So there’s no big banner headline on Apple’s official site about how you can save on repairs this way. Apple’s also running the whole operation through a new company.
In keeping with that slight distancing, the first documentation of the two that Apple has released spends much time telling you to use Apple Stores to get your repairs done.
“We believe customers should have access to safe and reliable service and repairs that do not compromise their security, their privacy, or the functionality of their device,” says Apple in its new “Expanding Access to Service and Repairs for Apple Devices” document.
“We also know that a repair is more likely to be done correctly when it’s performed by skilled, trained professionals,” it continues, “using genuine Apple parts engineered for quality and safety, and tools designed for the repair.”
Then it does undermine some of this by trying to make it sound impressive that every Apple repair technician has had “more than a dozen hours” of training.
Nonetheless, this manual is a wide-ranging guide to what Apple is doing, and how it’s hoping the service will be used. For a deeper, more specifically focused look, there’s the actual self repair service manual.
The direct and store links both take you to the same list of all Apple manuals, whether for repair or not. Currently there are 130 listed, and they range from the Mac Studio Quick Start Guide, to the iPhone 13 Pro Repair Manual.
At present, there are nine such repair manuals, all for the iPhones that are included in the Self Service Repair program:
Each is broken down into sections starting a basic overview of the iPhone in question, followed by one about safety during repairs. Finally there are the procedures for conducting repairs, ranging from changing the battery or replacing a screen, to fixing cameras and the Taptic Engine.
Once you get into these procedures, you see detailed step-by-step instructions for the repair. And each step is accompanied by an annotated photo illustration.
Every step is illustrated, and there are very many warnings along the way
With around 80 pages per repair manual, a lot of the steps are the same or very similar across the different models. So if you are just curious to see what a repair entails, you could really read any of them.
Whereas, naturally, if you’re going to do such a repair, you need to find precisely the right manual, and study it.
“Read the entire manual first,” says Apple in the introduction to every repair manual. “If you’re not comfortable performing the repairs as instructed in this manual, don’t proceed.”
AppleInsider News
https://www.percona.com/blog/wp-content/uploads/2022/04/Working-With-Large-PostgreSQL-Databases-200×105.png
It’s a funny thing when the topic of database sizes comes up. Calling one small, medium, large, or even huge isn’t as straightforward as you’d think. Distinguishing the size of a database is based upon a number of factors whose characteristics can be classified as either “tangible”, things that you can measure in an objective manner, or “intangible”, those attributes best expressed using the catch-all phrase “it depends”. For example, a 2TB database is, to many people, a large database. On the other hand, a veteran DBA could describe a PostgreSQL database cluster as large when it enters the realm of petabytes.
Here’s a recap of some of PostgreSQL’s basic capabilities:
| database size | unlimited |
| number of databases | 4,294,950,911 |
| relations per database | 1,431,650,303 |
| table size | 32TB |
| rows per table | defined by the number of pages: 4,294,967,295 pages |
| fields per table | 1,600 |
| field size | 1GB |
| identifier length | 63 bytes |
| indexes per table | unlimited |
| columns per index | 32 |
| partition keys | 32 |
NB: Despite possible physical constraints one faces when creating large numbers of schemas, there is no theoretical limitation to the number that can be created in PostgreSQL.
I’ve come to differentiate a small database from a large one using the following caveats. And while it is true that some of the caveats for a large database can be applied to a small one, and vice-versa, the fact of the matter is that most of the setups out there in the wild follow these observations:
Large databases often bring up the following questions and issues:
The key difference between a small and a large database is how they are administered:
Good planning is your friend: addressing potential issues for a large database by anticipating future conditions is the goal i.e. testing the entire infrastructure before it goes into production.
Scripting your build environment using tools such as Ansible, Puppet, Terraform, etc. mitigates human error when provisioning the underlying infrastructure. It’s important to be able to build in a consistent and repeatable manner.
Once a database is in production it must be monitored and wired with alerts for the various critical thresholds. Aside from the standard errors, consider configuring your monitoring solution to follow the “Rule Of Three”. Select and watch only three metrics that track and alert for a specific “change of state”. This is not to be confused with following a particular issue, rather it is meant to inform you that you should pay attention to your system in order to understand that something has changed from what is considered normal behavior. Depending on your preferences you may want to watch for known production issues or when the system is stable you might be more interested in trending alerts such as query performance which have slowed below a predefined threshold.
In regards to system tuning: while small databases can, after a fashion, perform in a satisfactory manner using the default values, large databases cannot. Configuring initial tuning parameters such as shared_buffers is de rigueur, but you should also monitor the environment in order to trend issues such as bloat and long-term query performance. Remember, the most common problem experienced by an otherwise stable and well-thought-out architecture is table and index bloat, so addressing bloat by tuning the autovacuum characteristics is essential.
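By way of illustration, such tuning typically happens both globally and per table; the values below are placeholders for the sketch, not recommendations:
# global tuning via ALTER SYSTEM (note: shared_buffers requires a restart to take effect)
psql -c "ALTER SYSTEM SET shared_buffers = '8GB';"
# per-table autovacuum tuning for a large, frequently updated table
psql -c "ALTER TABLE big_table SET (autovacuum_vacuum_scale_factor = 0.01);"
psql -c "SELECT pg_reload_conf();"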
Monitoring, especially before and after maintenance windows, is required because it can catch potential problems with an update before they become production issues.
Pay close attention to following the regular maintenance activities during the life-cycle of your system:
Maintenance activities such as logical backups and PostgreSQL minor upgrades are performed at regular intervals.
Plan for space utilization requirements of logical dumps and WAL archives.
In regards to logical backups: it can be difficult to justify backing up an entire database when it can take a week. Alternatively, differential backups are a potential solution. Tables that are updated and deleted regularly can be archived more frequently than the slower-changing tables, which can be stored without changes for a longer period of time. This approach however requires the appropriate architectural design considerations, such as using table partitioning.
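A sketch of that idea using pg_dump (the table and database names are made up):
# archive the fast-changing tables frequently, e.g. nightly
pg_dump -Fc --table=public.orders --table=public.events mydb > hot_tables_$(date +%F).dump
# dump everything else on a slower cadence, e.g. weekly
pg_dump -Fc --exclude-table=public.orders --exclude-table=public.events mydb > base_weekly.dump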
An alternative to logical backups is to consider Copy On Write (COW), or stacked file systems, such as ZFS and BTRFS. Environments within containers for example can leverage snapshots and clones allowing for near-instant recoveries in a disaster recovery scenario.
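For instance, with ZFS a near-instant recovery point can be taken around risky operations (a sketch – the pool and dataset names are made up):
# take a snapshot before a risky change
zfs snapshot tank/pgdata@before-upgrade
# roll back in seconds if the change goes wrong (with the database stopped)
zfs rollback tank/pgdata@before-upgrade
# or clone the snapshot to spin up a writable copy for testing
zfs clone tank/pgdata@before-upgrade tank/pgdata-clone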
Complex operations, such as hardware and database scaling, encompass many sub-activities and can often involve working with several teams at the same time. In this case, maintaining reference documentation is critical. Activities such as these are best tracked and planned in a Kanban, or Scrum, environment.
In regards to Disaster Recovery (DR) consider automating the following operations:
As an aside to PITR: instead of rebuilding an entire data cluster from scratch to a particular point in time, one can instead create a STANDBY host that is replicated on a delay and can be recovered to a particular point in time or promoted in its current state. Refer to run-time parameter recovery_min_apply_delay for more information.
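Setting up such a delayed standby is a one-parameter affair on the standby host (the four-hour delay below is an arbitrary example):
# on the standby: apply WAL no earlier than 4 hours after commit on the primary
psql -c "ALTER SYSTEM SET recovery_min_apply_delay = '4h';"
psql -c "SELECT pg_reload_conf();"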
In conclusion, while small databases can be managed by administrating in an ad hoc manner, the administration of a large database must always be performed using a more rigorous and conscientious approach. And what you learn from administering a large database can be carried over to administering a small one.
Percona Database Performance Blog