We can leverage several command-line tools (formatters, linters, and test runners) to safeguard the quality of the codebase even before changes are merged into the repository.
These tools are fantastic, but there's a catch: you and your teammates must remember to run them before pushing code to the repository. And, being humans, sometimes you might forget to do so.
That's why today we'll talk about Husky, a tool that automatically runs any number of commands whenever you commit or push. You'll never have to worry about forgetting to format, lint, or test before uploading code to the repo — Husky does it for you every time you run git commit or git push. Let's get started.
Suppose we are working on a Laravel project containing PHP and JavaScript code. Don't worry, these instructions will work even if you're not in this exact tech stack.
For PHP, let's use:
./vendor/bin/duster fix
php artisan test
And for JavaScript:
npm run format
npm run lint
npm run test
Husky is a modern native solution for managing your Git hooks, custom scripts you can define to be fired when certain Git events take place. Created by typicode (the developer behind json-server and jsonplaceholder) and counting over 10 million weekly downloads on npm, this tool allows you to automate command execution upon Git events like committing or pushing.
To start, install Husky as a development dependency:
npm install --save-dev husky
After installation, initialize Husky by running the following command:
npx husky init
Please note that, to run this command successfully, you'll need to first initialize your Git repository.
This simple command does two essential things:
Adds a prepare script to your package.json.
Creates a .husky folder in your project's root directory containing a pre-commit file.

This pre-commit file is a bash script that will run before every commit; by default, it contains just one line:

npm test
We can edit this file to include additional commands. Let's adapt it to run the commands we need for our specific project. Modify the pre-commit file as follows:
npm run format
npm run lint
npm run test
./vendor/bin/duster fix
php artisan test
Now, when you execute a commit using:
git add -A && git commit -m "My commit message"
Husky will automatically run the specified commands before allowing the commit to proceed.
For example, imagine we have an undefined method in a JavaScript file. Husky will execute the first command, npm run format, without issues. It will then proceed to the second command, npm run lint, where it will encounter the error:
/dev/laravel-app/resources/js/bootstrap.js
  4:1  error  'translate' is not defined  no-undef

✖ 1 problem (1 error, 0 warnings)

husky - pre-commit script failed (code 1)
Husky will halt the execution of subsequent commands and cancel the commit, allowing us to rectify the issue before attempting to commit again. There's nothing like catching a bug before it flies!
Once you've completed your Husky setup, add the .husky folder to your repository to ensure that the pre-commit hook we created will run before anyone commits to the codebase.
You can add more hooks by creating more files in the .husky folder. The filename must be a valid Git hook name; for example, pre-push.
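For instance, a minimal .husky/pre-push file could rerun both test suites right before code leaves your machine (a hypothetical example; adjust the commands to your project):

npm run test
php artisan test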
This strategy is excellent, but you might have noted a downside: running the formatter and linter across the entire codebase may seem excessive when modifying only a few files. Let's address that.
With Duster, we can utilize the --dirty flag, which instructs the tool to run linters or fixers only on files that have been staged but not yet committed.
In our example, the command would be:
./vendor/bin/duster fix --dirty
We can proceed to edit the .husky/pre-commit file:
npm run format
npm run lint
npm run test
-./vendor/bin/duster fix
+./vendor/bin/duster fix --dirty
php artisan test
And we are done! Now, let's move to JavaScript.
lint-staged is a tool that enables us to perform checks selectively on just the files we've edited and staged for commit, resulting in a significantly quicker check run.
To use it, let's install it as a development dependency:
npm install --save-dev lint-staged
Add this script to package.json:
{ "scripts": { // other scripts "lint-staged": "lint-staged" }}
Now, let's update the Husky pre-commit hook. Open .husky/pre-commit and replace the multiple npm commands with the one we just added:
-npm run format
-npm run lint
-npm run test
+npm run lint-staged
./vendor/bin/duster fix --dirty
php artisan test
Finally, let's create a configuration file named .lintstagedrc.json in the root of the project. This file specifies which commands to run based on file extensions. The precise set of commands may vary depending on your setup, but in our example, it will be:
{ "*.css": [ "prettier --write" ], "*.{js,vue}": [ "prettier --write", "eslint --ignore-path .gitignore --fix", "vitest related --run --environment=jsdom" ]}
We are instructing lint-staged to perform the following tasks:

When CSS files are staged: format them with Prettier.
When JavaScript or Vue files are staged: format them with Prettier, lint and fix them with ESLint, and run the Vitest tests related to them.

So, if we only have three files staged, lint-staged will check just those three, skipping the rest of the codebase. This way, we save time by focusing only on what's new or changed.
$ git add -A && git commit -am "Add a cool feature"

> demo-husky@0.0.0 lint-staged
> lint-staged

✔ Preparing lint-staged...
✔ Running tasks for staged files...
✔ Applying modifications from tasks...
✔ Cleaning up temporary files...
[main b57de91] Test lint-staged
 2 files changed, 4 insertions(+), 7 deletions(-)
We run the whole test suite whenever we stage JavaScript code, because our changes might have unexpected side effects on other parts of the app. However, if we only edited CSS files, we can safely assume it's okay to skip JavaScript testing.
And if we don't stage any JavaScript or CSS files, none of these checks will run. For example, if we stage app/Models/User.php, Husky will run Duster and Pest, but none of the commands included in lint-staged (Prettier, ESLint, and Vitest).
That's it! We have covered how to format, lint, and test our front-end and back-end code.
This setup will allow you and your team to comply with the project's code styling and quality standards and catch bugs before they get merged into the repo's main branches.
Yes, initially, it might be frustrating to have a commit halted because of an error. But the payoff greatly exceeds that initial discomfort. And here's a little secret: if you want to bypass Husky, add --no-verify at the end of your commit command. But don't tell anyone I told you!
You can access the complete codebase for this article on this public GitHub repo.
I hope you can implement one or two tips from this guide. If you want to see more content like this, let us know. Until next time!
Today, we'll delve into state management strategies in Vue and introduce Pinia, the intuitive store.
Since the days of Vue 2, we have used the data option to define a method that returns an object containing all the reactive variables our component needs.
<template>
  <div>{{ user.name }}</div>
</template>

<script>
export default {
  data () {
    return {
      user: {
        name: 'John',
        age: 25
      }
    }
  }
}
</script>
This component definition, known as the Options API, is still supported in Vue 3, which also introduced the Composition API. This new API offers methods such as reactive and ref to define reactive data. By leveraging the Composition API and script setup, we can rewrite the script portion of our single-file component like this:
<script setup>
import { reactive } from 'vue'

const user = reactive({ name: 'John', age: 25 })
</script>
Now, what if we need to access the user data from multiple components? For example, displaying the user name in the navbar, their details on the profile page, their address during checkout, and so on.
Typically, a parent component can pass down this data to its children as props. However, when the child component requiring the data is nested three or four layers deep, you may find yourself adding that prop to every component in the hierarchy, whether they directly use the data or not. This practice, known as prop drilling, is generally discouraged as it can compromise the maintainability of the code.
Things get even more complex when you have to update a shared piece of data from multiple places. A child component cannot directly modify a prop. Instead, it must emit a custom event to notify about the change. The parent component can then listen to that custom event and update its data, which in turn travels down the prop chain. Yeah, it could be better.
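To picture that dance, here's a minimal sketch (the component, prop, and event names are made up for this illustration):

<script setup>
// Child component: it can't mutate the `user` prop directly,
// so it announces the change and lets the parent perform it.
defineProps({ user: Object })
const emit = defineEmits(['blow-candles'])
</script>

<template>
  <button @click="emit('blow-candles')">It's my birthday!</button>
</template>

The parent then wires it up with <BirthdayButton :user="user" @blow-candles="user.age++" />, and the new value travels back down the prop chain.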
Fortunately, Vue 3 offers a straightforward solution to this challenge: thanks to the Composition API, we are no longer restricted to using Vue's reactivity methods within the confines of Vue components. Now, we can use ref and reactive in any script and export these reactive variables to use them throughout our application.
We can refer to these modules of reactive state as stores. For instance, we can create a stores/user.js store:
import { reactive } from 'vue'

const user = reactive({ name: 'John', age: 25 })

export { user }
... and then import it into a Profile component:
<script setup>
import { user } from './stores/user.js'
</script>

<template>
  <h1>Hello, {{ user.name }}! Remember, you are {{ user.age }} years old.</h1>
</template>
... or a HappyBirthday component:
<script setup>
import { user } from './stores/user.js'

const blowCandles = () => user.age++
</script>

<template>
  <button @click="blowCandles">
    I am {{ user.name }} and it's my birthday!
  </button>
</template>
... or any other location where we need to reference the user.
Great, right? Now, we have a reactive single source of truth for our data, not limited to a specific component.
As you can see, any component can also alter that data and the changes will instantly reflect on all the components reading it. So, if John blows out the candles, his Profile will read, 'Hello, John! Remember, you are 26 years old.'
Having a globally mutable state that any component can update is handy but might make things tricky to maintain. To keep things organized and clear, it's a good idea to define state-mutating logic on the store itself:
import { reactive } from 'vue'

const user = reactive({ name: 'John', age: 25 })

const blowCandles = () => user.age++

export { user, blowCandles }
... and use it like this:
<script setup>
import { user, blowCandles } from './stores/user.js'
</script>

<template>
  <button @click="blowCandles">
    I am {{ user.name }} and it's my birthday!
  </button>
</template>
It functions just like before, but the code is now more organized.
You can have small stores (picture a store to manage a short list of user preferences) or more intricate ones, handling substantial amounts of data with various methods for reading and updating it (such as a store to manage products on an e-commerce site).
This simple state management pattern is convenient, but:
It doesn't work for Server-Side Rendered applications. The shared state exists in a JavaScript module root scope, so only one instance of the reactive object is created throughout the app's lifecycle. That's fine for Single Page Applications, where modules initialize fresh for each visit, but not for SSR, where modules initialize only once, when the server starts. This could potentially lead to data leaks and security issues. Imagine a user visiting your app... and getting the data of another user. While you can configure your SSR app to support these basic stores, the process can be cumbersome.
Even if your app doesn't use Server-Side Rendering, you might need a stronger solution as it grows. For instance, picture needing to add a method to all your stores, keep their state synced with Local Storage, or stream each change through Websockets. Creating a base store that all others extend from is possible, but it can get complex really fast.
To solve these issues and get a rich, developer-friendly experience, we can use Pinia, the official state management solution for Vue 3. Let's take a closer look!
Pinia not only provides out-of-the-box support for SSR but also comes with a range of other goodies, including Vue Devtools integration, Hot-module replacement, TypeScript support, and easy-to-install plugins that can handle the features we mentioned, like sync with LocalStorage.
Created by Eduardo, the developer behind Vue Router, it replaced Vuex as the official recommended state management solution for Vue 3.
To install Pinia in your project, you can run:
npm install pinia
Then create a Pinia instance and pass it to your Vue app:
import { createApp } from 'vue'
import { createPinia } from 'pinia'
import App from './App.vue'

const pinia = createPinia()
const app = createApp(App)

app.use(pinia)
app.mount('#app')
And it's done! You are now ready to create and use stores.
You can create a Pinia store with the defineStore method. The first argument is the store's name (must be unique), and the second is an options object. Let's rewrite our stores/user.js store in Pinia and, while we are at it, add a computed property to know whether the user is old enough to vote.
import { defineStore } from 'pinia'

export const useUserStore = defineStore('user', {
  state: () => ({
    name: 'John',
    age: 25
  }),
  getters: {
    canVote: (state) => state.age >= 18,
  },
  actions: {
    blowCandles () {
      this.age++
    }
  }
})
As you can see, a Pinia store has:
state, the reactive data
getters, computed properties based on that data
actions, methods to interact with that data

This initial glimpse at a Pinia store may bring to mind a Vue 2 component or a Vue 3 component using the Options API. This type of definition is called Option Stores.
If you prefer the Composition API, you'll be happy to know that Pinia offers the ability to define your store like a setup function. In Setup Stores, you can pass a function as the second argument and use ref, computed, and methods to define state, getters, and actions, respectively. The function must return an object with the variables you want to expose.
import { defineStore } from 'pinia'
import { ref, computed } from 'vue'

export const useUserStore = defineStore('user', () => {
  const name = ref('John')
  const age = ref(25)

  const canVote = computed(() => age.value >= 18)

  const blowCandles = () => age.value++

  return { name, age, canVote, blowCandles }
})
Using Setup Stores has some advantages, like the ability to define watchers in the store itself, use other composables and inject provided properties in our setup function.
Now that we have defined our Pinia store, we can easily import it to our components or composables. Let's update our example:
<script setup>
import { useUserStore } from './stores/user.js'

const user = useUserStore()
</script>

<template>
  <button @click="user.blowCandles">
    I am {{ user.name }} and it's my birthday!
  </button>
</template>
Accessing the state and actions of the store through the user object is super handy. However, if you want to destructure it to tidy up your code, there's a trick to remember. When destructuring reactive properties (those created with ref, reactive, or computed), you'll need to use the storeToRefs helper to ensure they maintain their reactivity. For example:
<script setup>
import { storeToRefs } from 'pinia'
import { useUserStore } from './stores/user.js'

const user = useUserStore()

// You need storeToRefs when destructuring
// properties created with `ref` or `computed`
const { name, age, canVote } = storeToRefs(user)

// But you don't need it when destructuring methods
const { blowCandles } = user
</script>
Now, unless all your visitors are 25-year-old folks named John, you won't initialize your store with those default values. Let's address that and take the opportunity to showcase a store closer to what you might find in a real-world application:
// Let's use Ofetch to make AJAX requests
// (https://npmjs.com/package/ofetch)
import { ofetch } from 'ofetch'
import { defineStore } from 'pinia'
import { ref, computed } from 'vue'

export const useUserStore = defineStore('user', () => {
  const data = ref()
  const token = ref()

  const isLoggedIn = computed(() => Boolean(token.value))

  async function login ({ email, password }) {
    const payload = await ofetch('https://example.com/login', {
      method: 'POST',
      body: { email, password }
    })

    data.value = payload.data
    token.value = payload.token
  }

  async function logout () {
    await ofetch('https://example.com/logout', {
      method: 'POST',
      headers: { Authorization: `Bearer ${token.value}` }
    })

    data.value = null
    token.value = null
  }

  return { data, token, isLoggedIn, login, logout }
})
As seen in this example, the store is an excellent place to encapsulate the logic of a segment of your application. Here, the login action makes a request to an API, saving the user data and token in the store state. The logout action interacts with the corresponding endpoint, clearing out the state. We return these two methods along with the state, data and token, and the isLoggedIn getter.
Then, we can use the store in a component as follows:
<script setup>
import { reactive, ref } from 'vue'
import { useUserStore } from './stores/user.js'

const user = useUserStore()

const form = reactive({ email: null, password: null })
const error = ref(null)

async function handleSubmit() {
  try {
    await user.login(form)
  } catch {
    error.value = true
  }
}
</script>

<template>
  <div v-if="user.isLoggedIn">
    Welcome back, {{ user.data.name }}!
    <button @click="user.logout">Logout</button>
  </div>
  <form v-else @submit.prevent="handleSubmit">
    <input type="email" v-model="form.email">
    <input type="password" v-model="form.password">
    <span v-if="error">Error, please try again</span>
    <button>Login</button>
  </form>
</template>
This looks great! However, it's still a simple example. We can use Pinia not only to manage the user session but also to track the data they need access to, such as notifications, to-dos, products, and so on. Once the data is requested from the API, retaining it in the store while the user navigates through the app leads to fewer subsequent requests to the server, resulting in much faster applications.
import { ofetch } from 'ofetch'
import { defineStore } from 'pinia'
import { ref } from 'vue'

export const usePostsStore = defineStore('posts', () => {
  let loaded = false
  const endpoint = 'https://jsonplaceholder.typicode.com/posts'

  const list = ref([])

  async function get (params = {}) {
    if (loaded && !params.forceReload) return

    list.value = await ofetch(endpoint)
    loaded = true
  }

  async function add (body) {
    list.value.push(await ofetch(endpoint, { method: 'POST', body }))
  }

  async function remove (id) {
    await ofetch(`${endpoint}/${id}`, { method: 'DELETE' })

    const index = list.value.findIndex(post => post.id === id)
    if (index >= 0) list.value.splice(index, 1)
  }

  return { list, get, add, remove }
})
We began this article by discussing Vue 2, and now we've come full circle: Pinia is fully compatible with Vue 2. So, if you have a Vue 2 application using Vuex for store management, migrating those stores to Pinia could be the initial step in your upgrade to Vue 3.
Upgrading from Vue 2 to Vue 3 is a complex task, though, so if you feel you need assistance, don't hesitate to get in touch with us 😉
If you have the Vue Devtools extension installed in your browser (which is highly recommended for Vue app development) and you're using Pinia, you'll notice a new tab where you can explore your stores:
This plugin allows you to explore your stores, inspect the state and values of getters, serialize the state to save it as JSON or copy it to your clipboard... and even import state from a JSON file!
Developing an effective state management strategy in Vue applications might seem daunting at first, but it becomes much easier once you understand these core concepts.
Pinia offers an excellent solution to help keep your data well-organized and accessible from anywhere in your application. The developer experience is top-notch, and integrating it into your project is very straightforward. If you want to try it, head to the official website to discover more. Until next time!
Initially envisioned as a counterpart to Next.js within the realm of React (hence the similarity in names), Nuxt has evolved to such an extent that it now stands out for its distinctive merits and an impressive array of features.
What are these features? Why should you choose Nuxt for your project instead of simply creating a Vue application? Today, I'll answer these questions by exploring some (just some!) of the powerful tools this framework offers. Let's jump right into it!
Let's start with a simple but powerful quality-of-life convenience. Take a look at this component code:
<script setup>
import { ref } from 'vue'
import FormInput from '@/components/FormInput.vue'
import AppButton from '@/components/AppButton.vue'
import { useAuth } from '@/composables/auth'

const email = ref()
const password = ref()
const auth = useAuth()

function onSubmit () {
  auth.login(email, password)
}
</script>

<template>
  <form @submit.prevent="onSubmit">
    <FormInput v-model="email" placeholder="Email" />
    <FormInput v-model="password" placeholder="Password" />
    <AppButton label="Login" />
  </form>
</template>
We are importing a reactivity function from Vue, two custom components, and a composable. Wouldn't it be nice to auto-import them all?
Well, with Nuxt, you can. Out of the box, Nuxt auto-imports all Vue reactivity functions (ref, reactive, computed, etc.) and lifecycle hooks (onMounted, onBeforeUnmount, etc.). Additionally, it auto-imports components, composables, and utility functions from the /components, /composables, and /utils folders, respectively.
Considering that, our script tag could look like this:
<script setup>
const email = ref()
const password = ref()
const auth = useAuth()

function onSubmit () {
  auth.login(email, password)
}
</script>
This might appear like magic initially: "Where does this come from?" But I've discovered that it's quite intuitive, provided you adhere to Nuxt conventions. However, if you prefer to avoid following them, you always have the option to tailor the auto-imports folders in your Nuxt configuration file.
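For example, a sketch of what that customization could look like in nuxt.config.ts (the extra folders are purely illustrative):

export default defineNuxtConfig({
  imports: {
    // Auto-import composables and utilities from extra folders
    dirs: ['stores', 'helpers']
  }
})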
Auto-imports speed up your development process: if you want to use a component, you can just include it in your template, and if you no longer need it, simply remove it from your template; there's no corresponding import statement to add or delete.
If you've previously worked on a Vue application, you may have utilized Vue Router to match your app's URLs with the components responsible for rendering them. While configuring the router isn't complicated, Nuxt simplifies the process further with a file-based routing system.
You can create dedicated files for each route within the /pages directory, and Nuxt handles the router configuration for you.
So this structure:
pages/
  index.vue
  login.vue
  users/
    [id].vue
...transforms into:
{ "routes": [ { "path": "/", "component": "pages/index.vue" }, { "path": "/login", "component": "pages/login.vue" }, { "path": "/users/:id", "component": "pages/users/[id].vue" } ]}
As you can see, we can utilize dynamic parameters by enclosing the file’s name (or part of the name) in square brackets, like report-[id].vue. On that page, we can access the dynamic parameter as follows:
const route = useRoute()
const reportId = route.params.id
You can learn more about dynamic routes in the official documentation.
Nuxt makes it incredibly easy to fetch data from your API. Simply use the useFetch composable, which is automatically imported by default, to make the request:
<script setup>
const { data: posts } = await useFetch('https://jsonplaceholder.typicode.com/posts')
</script>

<template>
  <div v-for="post in posts" :key="post.id">
    {{ post.title }}
  </div>
</template>
This composable also comes with many other goodies that you can destructure if needed:
pending: A boolean that indicates whether the data is currently being fetched.
error: An error object that is present if the data fetching process fails.
refresh: A function that can be called to execute the same request.
status: A string with a status description ("idle", "pending", "success", "error").
<script setup>
const { data, pending, error, refresh } = await useFetch('https://my-api.com/foo')
</script>

<template>
  <div v-if="pending">Loading...</div>
  <div v-else>
    <div v-if="error">Oops! Something went wrong</div>
    <div v-else>{{ data }}</div>
    <button @click="refresh">Load</button>
  </div>
</template>
Additionally, Nuxt provides two other functions to fine-tune your data fetching: $fetch and useAsyncData. You can learn more about all these methods in the docs.
We've all been there: you've created an impressive Vue website, and all seems well until the client mentions, "Hey, the page isn't showing up on Google." That's because, by default, Vue applications render in the browser. When a search engine bot crawls your site, it only sees an HTML document with one empty <div> and a bundle of JavaScript to render content within it. If SEO matters to you, the best strategy is still to render pages on the server and deliver complete, semantic HTML for each URL.
The good news is that Nuxt takes care of this automatically. The pages and components are rendered on the server, which returns the complete, crawler-friendly HTML.
"But hold on, isn't it slower to render each page on the server? Wasn't that the issue SPAs were meant to solve?" Good question. In traditional server-side rendered applications, the server sends the complete HTML page with every request, even if certain components remain consistent, like the header and footer.
Nuxt takes care of that as well. By default, the first page requested will be rendered server-side, thus making the content readable to search engines. And once the page loads in the browser, the app behaves like an SPA: when the user clicks on a link, it only re-renders the needed Vue components.
That’s the default, but Nuxt gives you full control over the rendering modes. Do you have a part of your website that never needs to be rendered on the server? Or some pages that need to be statically generated? Don’t worry, Nuxt has you covered. Take this example from the docs:
export default defineNuxtConfig({
  routeRules: {
    // Homepage pre-rendered at build time
    '/': { prerender: true },
    // Product page generated on-demand, revalidates in background
    '/products/**': { swr: 3600 },
    // Blog post generated on-demand once until next deploy
    '/blog/**': { isr: true },
    // Admin dashboard renders only on client-side
    '/admin/**': { ssr: false },
    // Add cors headers on API routes
    '/api/**': { cors: true },
    // Redirects legacy urls
    '/old-page': { redirect: '/new-page' }
  }
})
Nuxt modules are plug-and-play packages you can install and configure in your app in a breeze. Need Tailwind, ESLint, Google Fonts, Pinia, or Supabase for your project? There's a dedicated module for each of these tools, and many more to explore.
Additionally, the Nuxt team has crafted first-party official modules, including:
Adding a module to your app is super straightforward. Take the Google Tag Manager module, for instance: you just need to install it:
npx nuxi@latest module add nuxt-gtm
... and then configure it on your Nuxt config file:
export default defineNuxtConfig({
  modules: ['@zadigetvoltaire/nuxt-gtm'],
  gtm: {
    id: 'GTM-xxxxxx'
  }
})
And voila! You're all set and ready to go.
Before we wrap up, let's give a shout-out to the Nuxt DevTools: an incredible developer toolbar that allows you to inspect and debug pages, components, imports, composables, and much more. You can even install or remove any Nuxt module right from this toolbar with just one click!
Nuxt has been my go-to tool when creating new Vue projects. It's versatile, fast, easy to use, and easy to learn. Today, we only scratched the surface of what Nuxt offers. The list could go on with lazy-loading components, the Nitro server, SEO enhancements, the powerful Layers, and more.
I hope this introductory article has sparked your interest to give it a try. Feel free to let us know if you'd like to see more Nuxt-related content on this blog. Until next time, Nuxt on!
On one hand, caching strategies can offer considerable gains in performance. On the other hand, those gains may be at the cost of increased complexity within the codebase or infrastructure. Plus, there is always the threat that caching is providing out-of-date data.
When we think about the risk versus reward when it comes to caching, I recommend making considerate choices to ensure a successful payoff. The right caching choices have everything to do with your infrastructure, your skill set, and what your application does.
To learn more about making successful caching decisions for your application, let’s walk through the wide range of caching layers available in the Laravel ecosystem.
Applications are incredibly diverse in the problems they solve, so there is no singular ideal option for every application. Instead, it takes a solid understanding of the application’s business logic and the technologies it uses to decide what is right for each individual application.
I follow a few tenets when thinking about cache implementations and their effectiveness.
Caching always introduces cost and risk. The complexity of the code is a cost, and the risk is that of potentially providing incorrect information to users. In many cases, then, the better solution is to optimize the application to reduce resource use or computation time so that we don’t have to rely on caching.
So, for the rest of this article, let’s assume my application is already reasonably optimized.
The thing I love about caching layers is that since no technology or pattern will solve all of my speed issues, I can pick and choose only the layers that are easy to implement and test. I can implement one now and add more later when it makes sense.
Below is a broad and undoubtedly incomplete list of caching layers available to most Laravel applications.
DNS and webhost level caching are those which happen completely in front of our Laravel applications; usually, the full request is cached based on the request URL and headers. This caching can significantly reduce the cost of rendering page HTML, including any lookups that were required to render that HTML.
This style of cache works well for pages that don’t require customization based on user data or can have that customization added after the fact (for example, via JavaScript). Many services exist that can manage this, and they have a wide variety of behaviors that you can tap into. Some primary examples:
These services excel at serving static content (similar to CDNs), and also save a lot of server load as they capture the incoming HTTP request and return a response without the request ever making it to my web server(s). Fewer requests to the server mean a lower load and snappier request/response cycles.
I can tune these caches and their interaction with my application to an incredibly fine detail. If you want an example, check out this post where Have I Been Pwned Operator Troy Hunt talks about how they use Cloudflare Cache Reserve.
The next level below this is to cache in your Laravel app at the closest-to-the-user level: the HTML. Similar to server and DNS caching, this caches the entire HTML, and allows you to avoid database calls and other server-intensive calls in many cases.
However, this type of cache gives you a lot more control of what is and isn’t cached. For example, I could choose to cache routes based on session or request data. I could cache customized page HTML per user per route, or per provided GET parameters.
With this type of cache I’ll get more control, and have less need to interact with the server, however, the gains won’t be as significant.
The two main packages of this type are Joseph Silber’s Page Cache and Spatie’s ResponseCache.
Caveats
There are a handful of out-of-the-box Laravel caching mechanisms that are super effective and session-friendly with very little added complexity. The Laravel Docs are pretty in-depth, so follow the links, if you’d like to learn more:
Laravel’s Cache tooling provides access to various cache services with a consistent API. This caching layer allows you to cache most any value, stored at a location defined by a string key—for example, cache “42” at the key “the-answer-to-life-the-universe-and-everything”. Laravel’s cache also offers many convenience methods for remembering, expiring, busting, locking, and throttling caches.
It’s perfect when I have a specific value that’s expensive to compute, and I know all the circumstances under which it should be busted, a term that means “the cached value is removed, so the application knows to recalculate the cached value.”
Let’s look at an example. If I’ve cached the total count of users in my system, this cache must be busted any time the app adds or deletes a user. Or, say, I’m caching a computation of the day of the past week that had the highest sales revenue; I’ll need to bust that cache at the end of the week at midnight.
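Here's a minimal sketch of that first case, assuming the count is cached through Laravel's Cache facade (the key name and observer are illustrative):

use App\Models\User;
use Illuminate\Support\Facades\Cache;

// Compute the expensive value once and keep it for an hour
$count = Cache::remember('users.count', 3600, fn () => User::count());

// Bust the cache anywhere the app adds or deletes a user,
// for example in a model observer:
class UserObserver
{
    public function created(User $user): void
    {
        Cache::forget('users.count');
    }

    public function deleted(User $user): void
    {
        Cache::forget('users.count');
    }
}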
One trick I like to use is to build helper objects—just simple plain old PHP objects (POPOs)—that help me manage the creation and retrieval of my key names, and also rules around busting.
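Here's a rough sketch of such a helper object (all names hypothetical):

use Illuminate\Support\Facades\Cache;

class UserSalesCache
{
    public function __construct(private User $user) {}

    // The single place that knows how this key is built
    public function key(): string
    {
        return "users.{$this->user->id}.sales-this-week";
    }

    public function bust(): void
    {
        Cache::forget($this->key());
    }
}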
Memoization is a cache that only lasts for the duration of a single request. The values are being cached either at a class level or sometimes at the request level, but they’re not stored on the server; they’re only held in memory for the current request, then discarded. Because of this, memoization is speedy to set up a working concept, and it’s very low risk because the value can’t be used for any other request.
Memoizing a resource-heavy calculation in a PHP class object is very simple:
class MyPhpClass
{
    protected $memoizedVariable;

    public function getResourceHeavyValue()
    {
        if ($this->memoizedVariable !== null) {
            return $this->memoizedVariable;
        }

        $this->memoizedVariable = $this->doingSomeHeavyCalculation();

        return $this->memoizedVariable;
    }
}
The beauty of this style is that it’s just a code pattern. As long as I’m using the same instantiated MyPhpClass object, I can call $object->getResourceHeavyValue() as many times as I like in this request cycle, and it won’t do a bit of that calculation again.
There's also a kind of memoization that functions only for the current request cycle but at an application-wide scale, rather than per object instance. Once is a solid package for this, written by Taylor Otwell and released by Spatie with his permission.
You can call once() and pass it a closure, and whatever happens in that closure, it'll only be run once in that request regardless of how many times that chunk of code is called.
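A quick sketch of the pattern (the function name is just for illustration):

function randomNumber(): int
{
    return once(function () {
        return random_int(1, 1000);
    });
}

randomNumber(); // e.g. 487
randomNumber(); // 487 again; the closure only runs once per request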
Regardless of how you implement it, memoization is perfect for a first attempt at caching in an application when I’m not ready to start digging into more intense caching.
Queries to the database are often the biggest offenders when it comes to page load time in web applications. I want to reiterate that it’s important to optimize your queries first, but if you do hit the point where you want to cache your database queries, you’ve got options.
This database-side caching mechanism can exist either in the database software interface (i.e., in front of the DB and Model facades) or in front of the database.
The most common pattern for caching database queries is to use Laravel’s cache, as we already talked about, to wrap expensive DB or model calls and cache the results.
$states = Cache::remember('states', $seconds = 3600, function () {
    return State::all();
});
There are also database query caching services that handle the caching for you directly in front of the database. PlanetScale Boost, for example, offers a paid database query cache service; check them out for further details and Laravel-specific implementation details!
If I introduce caching to my application, I need to make sure that anything cached will have the cache wiped (“busted”) whenever the data it contains has been made stale—or that I set reasonable expiration times on the cache. If we don’t have cache busting, our application’s public presentation will go out of date every time the data in the application changes.
Here’s a picture of the effects of poor cache busting:
Emails will be sent inappropriately. False congratulations will be offered. Users will double-import transactions because a graphic didn’t update after they entered a transaction the first time. Users will pay twice or pay the wrong amount. Everyone will get your application for free.
And the way these bugs will show themselves will be unique to each application, which means searching the Internet for solutions to your bugs will be challenging.
Meanwhile, these sorts of issues will frustrate users far more than a slow loading page.
Once the bug is finally discovered, the customer service team will need help to debug this (often transient and time-dependent) bug.
Whenever I’m adding caching, I make time to complete all of these tasks:
Cache busting is important any time I’m defining a standardized key for storage. Let’s consider a piece of data like sales-this-week. If I was tracking a user’s sales this week, I could store this cache value with a key users.[$user->id].sales-this-week. That makes it easier to bust just that user, or just their sales this week, or all user-related data.
Cache eviction is a cache-driver level mechanism that removes stale or aged key/value pairs. This keeps the overall cache size smaller, resulting in less memory use and snappier cache response times.
One cache naming strategy I love to use is only possible with cache eviction, so I’m a big fan. Basically, if I can tell if a cache should be invalidated (marked as out of date) purely based on the data in an Eloquent model, I can use that model’s updated_at value to automatically ignore past versions.

I can leverage cache eviction in this strategy by building my keys to include a timestamp, like users.[$user->id].[$user->updated_at].sales-this-week. I’ll make sure that whenever a user sale happens, I $user->touch() the User responsible for the sale, which will update its updated_at timestamp. The next time I try to get that user’s sales, the sales-this-week lookup will not find a match, and will calculate and cache the new value for sales-this-week.
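A rough sketch of that strategy (the sales relationship and key format are illustrative):

use Illuminate\Support\Facades\Cache;

function salesThisWeek(User $user): float
{
    // Embedding updated_at in the key means touching the user
    // silently retires the old entry; eviction cleans it up later.
    $key = "users.{$user->id}.{$user->updated_at->timestamp}.sales-this-week";

    return Cache::rememberForever($key, function () use ($user) {
        return (float) $user->sales()
            ->where('created_at', '>=', now()->startOfWeek())
            ->sum('amount');
    });
}

// Whenever a sale is recorded:
// $user->touch(); // bumps updated_at, so the next lookup builds a fresh key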
Since I’m not intentionally destroying the stale key/value pairs for old user sales, I need to make sure that sale records are evicted to keep my cache snappy and my server’s memory free—which is why this particular method basically requires me to use cache eviction if I want it to be effective.
If I don’t want to configure my cache driver for cache eviction, in Laravel’s scheduler I could run php artisan cache:clear for the driver or tags I’d like to clear at a reasonable frequency.
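In a classic Kernel-based setup, that could look something like this (the weekly cadence is only an example):

// app/Console/Kernel.php
protected function schedule(Schedule $schedule): void
{
    // Clear the default cache store every Monday at 3 AM
    $schedule->command('cache:clear')->weeklyOn(1, '3:00');
}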
Thanks Tony for reminding me that cache eviction policies exist.
Requirements of data lifecycles may encourage me to use different cache mechanisms.
Consider an application that uses an external service to get the GPS coordinates of addresses entered into the system. When an address location can’t be mapped, the application caches that address string into an ignore-list. It won’t execute an API call for that address to that external service again.
A more efficient solution is caching the valid data returned from that GPS service. It’s rare for an address to change its GPS location. Next time I try to look up that same address, I can use my local cached value! That way, I can save both the time of the HTTP request and the cost of utilizing that external service.
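A sketch of that happy path (the geocoding client is a stand-in; failed lookups would still need the ignore-list treatment, since Laravel's cache won't store a null result):

use Illuminate\Support\Facades\Cache;

function coordinatesFor(string $address): array
{
    // rememberForever suits data that effectively never changes
    return Cache::rememberForever('geo.'.md5($address), function () use ($address) {
        return GeocodingService::lookup($address); // hypothetical API client
    });
}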
In both examples above, I don’t want this data destroyed when I type php artisan cache:clear. This indicates that I want a more permanent data cache for these key/value pairs. It might be appropriate to use a different cache driver that I never clear, or a persistent cache on a separate server. If this data’s lifecycle might exceed the length of this application server’s lifecycle, storing that data in a permanent data store like the application’s database could make the most sense.
I hope this has provided an initial summary of the breadth and best use cases for various caching mechanisms in Laravel application development. I recommend caching in layers of your application where it offers a significant performance boost and where the complexity of implementation is reasonable.
In a future post, I’ll share some coding patterns that make Memoization and Cache friendlier to use.
There are many ways to organize software projects, each with their pros and cons. Below, I’ll break down a few common methodologies, and share smarter ways of working that enable your team to remain flexible.
Software development methodologies are generally organized under two broad systems: Waterfall and Agile. Waterfall projects are built along a linear progression through each phase of development. Agile, on the other hand, has become more popular as a more iterative way of working, built around regular, short production “sprints”.
Based on 12 concepts taken from a core manifesto, the Agile framework includes two primary subsets, Kanban and Scrum. The Internet is full of articles that break down the specifics behind these approaches (here’s one I like); rather than recounting those details, let’s discuss how these processes relate to the best way to work on your project.
Rooted in manufacturing, Waterfall is an outdated, inefficient methodology based on assumptions about software development that don’t play out in reality. Fundamentally, a Waterfall framework relies on the assumption that a development team can know everything about a project in advance when planning a schedule, which is unrealistic in the real world.
Every person involved in software development projects is human, and no one reliably can predict the development, design, or communication cycles that inevitably change during a project. However, Waterfall codifies guesses about how long each stage will take into a six-month-or-longer process. This inevitably leads to missed deadlines, unmet dependencies, and development that’s inefficient and ineffective. A hallmark of a Waterfall project will be a frustrated team lead at the end of the project upset about why things didn’t go according to plan.
Despite all of its proven inefficiencies, the Waterfall model just won’t die. Executive leaders often want a perfect diagram of what will happen in a project and in what order, typically laid out in a Gantt chart. And while Waterfall provides those details, that kind of rigidity doesn’t work in software development.
As its name suggests, Agile was designed as a flexible—or more agile—development methodology. Agile focuses on short sprints of development with the freedom to shift requirements along the way. However, from our perspective, what was intended as a set of ideas for how teams can remain lithe, flexible, and agile—notice the lowercase "a" in "agile"—has evolved into a rigid set of practices tied to dogmatic terminology. I like to call this the "Agile Industrial Complex."
There are many useful tenets of the agile philosophy that are framed as ideas and suggestions, but which uppercase-a-Agile has codified into rigid standards and structures. Teams meet for daily standups and weekly or biweekly sprint planning and retrospective meetings—both great ideas, supported by agile philosophy—but these meetings are often held with a strict structure and an inflexibility of application that completely belie the core concepts of agile. Often, the Agile coaches who implement these terms and processes suggest in their pitches that you can implement their methodology without meaningful changes to your team, expectations, or project timeline.
In effect, Agile implementations often apply many of the same constraints on your team’s work as Waterfall does, but simply wraps them in a different language. Agile can become a variant of Waterfall development by another name.
Scrum is a more regimented offshoot of Agile. A project leader called the “scrum master” guides teams through planning, standup meetings, and retrospective phases of a project. Ultimately, Scrum is a common offender of what I described in the previous section: a rigid system that’s touted as being more flexible than Waterfall, but with the same problems in the end.
In theory, Scrum can be used in a truly agile way. But the more systemized each project stage becomes in estimating work “velocity,” or what can be delivered in a sprint, the more Scrum starts functioning like a Waterfall process.
Kanban is not technically an offshoot of Agile, but rather an existing project management workflow that fits neatly into agile values. Kanban has been around since the 1940s, and is centered around a board that categorizes tasks in columns like “to do,” “in progress,” and “done.” At Tighten, we appreciate Kanban’s focus on flexibility and iteration; it seems the closest of any “framework” to the core ideals of agile. More important than anything else, to us, is a tool that flexes as requirements change, and we’ve seen that to be true with Kanban more than any other “framework.”
To be clear, we don’t describe ourselves as Kanban practitioners—we simply attempt to work in an agile way. Much like the original agile manifesto, our workflow is about people more than processes. Agile, Scrum, and Kanban may not suit the needs of your app or your startup. What if your team could embrace agile principles without the rigidity of following a single framework?
When you talk to a development agency about being agile, you should be thinking about the original definition of the term. You and your development team don’t have to conduct a daily meeting, and you don’t need a Scrum master to be agile. But you do need to build your teams and processes in ways that protect their agility.
Like an athlete, developers should be light on their feet and able to change direction quickly as conditions demand. Your team should be able to work together and communicate efficiently without being governed by a specific system. And when we talk about this at Tighten, we like to think about agile with a lowercase “a.”
At Tighten, we’ve written our own manifesto about how we work. The manifesto itself may evolve over time, but it’s built around the core concept of remaining agile in each new client context. Ultimately, we want to ensure our workflow is flexible enough to support each client and their particular needs.
For example, by default we use Trello to facilitate Kanban lists for our projects. Our process often includes weekly check-ins where we talk about what we completed last week and what we’re doing next week. At any point, our process allows us to remain flexible and define what’s most important for the project and our client in real time.
However, we don’t always use Trello, and we don’t always have daily or weekly meetings. Each client’s needs are different, and we adapt our workflow to suit those needs—even as they may evolve over time. When your development partner remains agile with its workflow framework, you can create an adaptable, efficient, and innovative process. Agility provides the real foundation that sets your app on a path toward success.
If your start-up or other business is looking for guidance on how to optimize the way your project will be built, we should talk. We can ensure your team has the expertise and agility to produce the right results.
Maybe you’re a founder and you built this app as you were learning to code, and you’ve always known you’d want experts to review your decisions. Maybe you’re concerned with the code you inherited from a previous team or offshore contractors. Or maybe you have an active development team, and you’re reading this thinking, “I’ve got a team of experts; why can’t they just audit their own work?”
Regardless of your current situation, there are many reasons why you may want a code audit:
In each of the scenarios above—even if your company has a team of experts—you have something to gain from bringing in an external team to perform a code audit.
Most code audits take place in the context of some sort of change. Your company may be scaling up; the organization of your team may be changing; you may be starting a new project, or taking on new funding.
Here are a few circumstances in which you may find an external code audit valuable:
You’re concerned your codebase isn’t good. More than any other factor, if you have any reason to be concerned about the quality of your codebase, that’s a good sign it’s time for an audit. This could be for myriad reasons: leadership cheaped out and went with an offshore programming team, you recently arrived at a new team and you’re concerned about their existing code, the bugs seem to be coming in faster than you can fix them... if you’re worried about your code, an external audit is your best next step.
You don’t have senior leadership able to assess your codebase. If there’s no one at your organization with the level of technical seniority necessary to understand what you’re doing right and wrong, it can often feel like you’re building your app in the dark. Many traditional leadership structures assume that somewhere there’s some brilliant nerd through whom all technical decisions filter—but in reality, many teams are comprised of a bunch of normal humans who are learning as they go.
You are at a turning point. When your organization is about to scale its user base, launch to a new and different audience, add a suite of new features, take on funding, increase the size of your team, or endure any other form of growth, it’s very common that the plans and decisions you’ll need to make require a deep understanding of the state of your code, its readiness for the future, and how much technical debt you’re carrying.
Your organization is changing internally. If you’re about to merge or have just merged, if your org chart is shifting around, if you’ve got a new CTO, or if there are any other organizational shifts that impact your dev team or the teams they interact with, a code audit can help provide a shared organizational understanding of the state of the codebase. The results of an audit can help galvanize and unify your shifting team’s efforts moving forward.
You have security concerns. Security audits are an entity unto themselves, but if you are dealing with particularly sensitive user information, closing a funding round, or are just concerned about data and privacy in general, it’s not a bad idea to conduct a code audit with a specific focus on security.
Any situation where there will be organizational changes, new personnel, additional architecture, or increased scrutiny is the right time for a code audit.
The examples above are varied in their reason for needing a code audit, and likewise, they're varied in what they require from a code audit. Each code audit should address a particular concern or set of concerns, driven by the specific stated needs of the organization requesting the audit.
This means the first step of an effective code audit is for your auditing team to work to understand your goals. Once the team understands your goals, they’ll dig into your codebase, your application, your logs, and your analytics, and possibly conduct interviews with stakeholders, devs, and users. They’ll take this information together and build a report which conveys some combination of analysis and recommendations.
Consider the following situations. The same type of code audit wouldn’t effectively meet every need. Instead, a tailored option provides a better, more insightful solution.
Situation one: You had an agency build an app because you don’t have an internal team to build one. In this case, your insight into the codebase will be limited. This can make it hard to assess your needs around maintaining your app moving forward.
The result: A code audit will shed light on the general quality and health of your code, and help you allocate programmers or hire accordingly.
Situation two: You have a lean team with a long-running app. Since the app has been in use for a long time, you aren’t sure how new features have been added over time, and your team has experienced turnover, resulting in a loss of institutional knowledge.
The result: A code audit will determine future points of failure to be aware of right now. It may also recommend refactors to make your code more consistent and future-friendly. Your smaller team can prioritize their work to shore up weak points immediately, before working on nice-to-have features.
Situation three: You have an extremely large codebase. Even with a robust internal team, there’s no bandwidth for an in-depth review that provides valuable information for both your development team and company leadership.
The result: An audit shows you a big picture view of quality, as well as granular issues that are a high priority. This means your team can get to work right away on the priority issues, and your leadership can receive an overall status report, both with an appropriate level of detail.
Beyond the situation, the outcomes of a code audit should also be customized to your team and their needs. A solo developer might be perfectly happy with a list of bullet-point observations and suggestions in an email. A venture-backed startup CTO charged with presenting a comprehensive report on the status of their software might want a streamlined, branded PDF to pass out to their board. And a non-profit that hired an agency to build their app might want an audit broken down into categories with a rubric for grading each category to get a comprehensive plan.
It’s nearly impossible for a developer to objectively scrutinize their own work. This isn’t because all developers are personally attached to the code they write—rather, because they’ve been exposed to the code for so long, it can be hard to work through it with a fresh perspective.
Plus, your developers likely don’t have the specific expertise and experience that comes with reviewing hundreds of different codebases on the regular. They may be highly skilled in coding—but that doesn’t always mean they are highly skilled code auditors.
Time and time again, we see the same issue when we conduct code audits: codebases that have been “future-proofed” with layers of code intended to set the application up for some future event, like a fundamental change in the database or framework used. This seems like a smart move to the team at the time, because it satisfies a big-picture business goal. But your internal team may not realize that goal for a decade, and expert auditors will identify if those extra layers of code are hurting your application’s performance in the present.
Some agencies that do code auditing also have a specific philosophy or way of working that is valuable for your team. If you run into problems because you’ve over-engineered all your features, working with an agency that believes code should be clean and simple could add a sorely missing perspective.
For example: Tighten’s way of working avoids over-abstracting and building architecture just for the sake of building architecture. We subscribe to the YAGNI philosophy: You Aren’t Gonna Need It. Our auditors are experts at seeing where something complex can be simplified, because simple code just works better over time.
To figure out if you need a code audit, write down what’s keeping you up at night about your project or application.
Next, ask yourself if you or your team can reliably address those concerns. Do you have the skillset? Do you have the right perspective? And most importantly, do you even have the time?
If you can’t enthusiastically say “yes” to all those questions, it’s best for your team, your application, and your peace of mind to have an external code auditor come in and poke around.
Whether you require a list of suggestions, a hefty technical analysis, or a branded PDF with an executive summary, having an expert set of eyes on your codebase is never a bad thing.
As programmers, technical leads, and project managers, we always test our applications by hand as we write them. Write a feature, try the feature out in the browser, make sure it works the way we want. Usually our clients do too—if you’re paying to have software built for you, you’ll usually test how it works when it’s delivered.
But this manual testing only covers a small portion of the things that need to be tested in our web apps. And if we don’t have full coverage, the software (and its owners and users) could be exposed to data loss, security troubles, financial loss, or even legal ramifications. Manual testing has a place in the software development world, but for the broadest coverage, you need to add automated testing coverage.
Manual testing is when a human being tests the application by navigating their way through it (usually in a browser). This human may be a programmer, the product owner, or a paid quality assurance (QA) engineer. If it’s manual, it requires a human to do it.
Automated testing, on the other hand, relies on software to run the tests. This software runs against scripts (automated tests) that either programmers or QA engineers have written, and the scripts can be run thousands of times a day, in different environments, manually or automatically.
Neither manual nor automated testing is perfect.
Manual testing relies on humans, and there are many benefits of a human interacting with your application; humans are, however, slower, more likely to make mistakes, and less capable of testing a matrix of different configurations for each test. It’s nearly impossible for even a full-time worker to examine every single aspect of a software tool every time it’s prepared for release.
Additionally, some tests don’t lend themselves well to manual testing. Security, data, and privacy issues are harder for humans to test for because the potential problem areas aren’t immediately apparent from a human perspective. For example, a security issue might not follow a typical user path. It could be buried deep in another path or function.
However, the human perspective that can’t identify a security breach is valuable in other ways. For example, if your web app has a graphical user interface that leverages 3D images and complex animations, an automated test can only tell you if it functions “properly.” It can’t tell you whether the graphics are realistic. That’s for a human to decide.
Ultimately, you need both kinds of tests. Unfortunately, automated tests seem harder to set up, so they are typically overlooked. However, automated tests are an important component in every dev team’s arsenal to achieve clean code and functional web apps.
It’s easy for organizations to eschew automated testing. If you have an existing codebase with no automated tests, there can be an upfront cost to setting it up for automated testing, and writing tests that cover a decent amount of the codebase. Additionally, if you already have a manual team, it requires a culture shift in addition to an operational change.
However, if you’re not using automated tests, you’re opening your web applications—and yourself—to unnecessary risks and unfortunate side effects.
Wasted Time and Resources

Relying solely on manual testing means you’ll have to build and maintain an entire QA department, often including multiple full-time roles. No matter how smart, talented, and hard-working these folks are, the reality is you’ll now be spending time, money, and energy managing this team.
We’ve seen first-hand how challenging it is to stand up a team of separate testers; QAs butt heads with each other and with programmers, departments subscribe to different testing philosophies, and ultimately, your test results suffer. Further, humans are good at creative thinking and the sorts of things that can’t be automated, but manual-only testing requires humans to do repetitive, boring tasks, which isn’t good for anyone.
Costlier Upgrades

Every time you upgrade any technology your web application relies on, it has the potential to introduce major bugs or security holes into the final application. As a result, every upgrade means the entire functionality of the app, including every edge case and previously-fixed bug, needs to be tested again. In a system without an expansive automated test suite, upgrades are costly and nerve-wracking—and therefore performed much less often, which is bad for the application and introduces even more technical debt and security risk.
Security Risks and Bugfix Regressions

One of the best ways to test for security risks and other unexpected application states is to run your application through hundreds of different potential scenarios every time you make a minor change. Every time a potential security risk is identified, your team will write an automated test to prove that security risk is patched. Every time a new bug is identified and fixed, your team will write an automated test to prove that bug is still fixed.
A lack of automated tests means security risks are significantly easier to introduce or miss in the first place, and bugs are much more likely to regress to a state before they were fixed. Automated tests help you stay away from being in the news as the subject of the latest hack or data privacy leak.
Poor User Experience

Bugs are simply a part of life in software development. Thankfully, users have come to expect that some number of bugs come along with any application. However, if you have more bugs than the competition? If you fix bugs and they become un-fixed? If it takes forever for your team to fix bugs because there are so many? That could have a serious impact on your users’ perception of your application and your organization.
As I wrote about in the previous example, bugs are more likely to regress to a broken state if you don’t have automated tests ensuring their continued removal. Furthermore, writing tests actually produces code with fewer bugs in the first place: your engineers have to encode the business logic into the tests, which makes them think more about what the code should do. And well-written automated tests will work through a suite of potential applications and edge cases for that feature, testing far more use cases than a programmer or product lead would in the normal development flow.
Across the board, automated testing produces applications with fewer bugs, which produces happier users and a better reputation for your app and your organization.
Major Business Issues

A buggy, security-risk-laden application isn’t just a problem for the users. It also could introduce massive costs to your organization. You’ll need more support, more QA engineers, more programmers to fix things, and potentially even more PR and legal to cover the outcome of the bugs’ impact.
If your bugs or breaches are privacy- or financial-related, you may find yourself in a situation—as a result of a simple bug—that could threaten to end your entire company. Maybe your manual test didn’t catch a bug that affects the automatic billing function. Six months later, your accounting team notices something isn’t adding up—and you discover your app has been billing canceled subscribers for the past eight weeks. Now you have to reimburse users for the erroneous payments. Two months’ worth of invoices is a hefty amount. If you didn’t have cash flow issues before, you will now.
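A regression test is cheap insurance against exactly this scenario. Here’s a minimal sketch of what one might look like in a Laravel feature test (the billing command, column, and table names here are hypothetical, but the shape is typical):

/** @test */
public function canceled_subscribers_are_not_billed()
{
    // Hypothetical column name; your schema will differ
    $user = User::factory()->create([
        'subscription_canceled_at' => now()->subMonths(2),
    ]);

    // Hypothetical artisan command standing in for the scheduled billing job
    $this->artisan('billing:process');

    // If a future change re-bills canceled subscribers, this fails immediately
    $this->assertDatabaseMissing('invoices', ['user_id' => $user->id]);
}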
Automated testing can have an upfront cost in an existing app, but over time it costs far less than manual testing. It’s also more reliable and much more expansive in its capabilities.
The best situation is to write automated tests as you go. Programmers are the best people to write the tests for the code they just wrote, but bigger organizations may choose to have QA engineers write the tests instead.
If your app is already running, it’s not too late! You can add tests yourself, or bring in a team like Tighten to add a testing framework, build out some initial test coverage, and teach your team how to test.
No matter what, if you are running a web application, you should be running automated tests.
The most common predictor of failure in software projects is a lack of alignment between the development team’s efforts and the purpose of the project. This is where a project purpose statement comes in.
It’s not enough for your team to know the general idea of the project, or the laundry-list of expected features. Your software project needs a clear, succinct statement to direct everyone’s effort.
A good project purpose statement is a concise, repeatable mantra that states the ultimate reason your project exists. Below we’ll cover why you need one, how to create one, and how it benefits your team and the project.
Everyone knows what it’s like to work on a project that has no clear purpose. It’s frustrating and demoralizing to work hard without an understanding of what your effort is for. Without a clear purpose, projects don’t have a throughline that connects tasks to end goals and to the overarching reason a project exists.
In addition, projects that lack well-articulated direction are especially vulnerable to scope creep. When you do have a laser-focused purpose statement, it’s easy to evaluate whether a task request is within scope. Without it, your software project could become your development team’s wild west. And no one, especially not your client, wants a lawless codebase.
When a project is off and running without a clear direction, it could implode or grow haphazardly until its original purpose is unrecognizable. For example, take Twitter’s recent foray into “Twitter Blue.” Who was it for? Why did they build it? From the outside, it appeared that no one really had those answers, and the project suffered because of its lack of purpose.
A project purpose statement is a crucial tool for creating alignment. A crisp, articulate purpose statement is a clear internal sales pitch that keeps your entire team on the same page, brings a sense of direction to each part of the project, and connects even the most arcane tasks to the end goal.
Who should write the project purpose statement? Ideally, the person who conceived of the project should be the one to get the ball rolling on the purpose statement. They know exactly why the project exists — or at least they should. Otherwise, the product lead or team leader or someone else with a similarly broad view should take on the responsibility.
When should you write it? The statement should be crafted at latest when a team is assigned to the project — whether that’s before the official kickoff or at the initial meeting. Starting off with an articulate project purpose statement ensures the whole team has the same goals in mind from the jump.
What should it contain? A project purpose statement should take a similar approach to an elevator pitch. It needs to convey its entire meaning clearly and quickly, with as little corporate fluff or jargon as possible. To be effective, your statement should answer the following questions:
A. What are we building? B. Who are we building it for? C. What concrete outcome are we trying to achieve?
This should take the form of A for B so that C.
Here are a few examples:
We’re building a A) CRM tool for B) independent HVAC businesses to C) convert 40% more of their inbound leads to customers.
We’re building an A) assistive mobile app for B) substitute teachers to C) help them build immediate rapport with students by remembering their names.
In addition to answering those three questions, a project purpose statement needs to be simple, concise, and memorable. You want your software development team to be able to commit this phrase to memory and be able to repeat it verbatim.
To achieve that goal, make sure your statement is:
There’s no room for verbosity here. Anyone can explain something in a million words. A good project purpose statement is short and declarative.
The challenge with brevity is that you will have to reduce the word count and leave something important out. Even though that may seem contrary to the entire point of the project purpose statement, it’s not. This statement is for your internal team to stay aligned on the purpose, not to detail every function of the project.
It’s unlikely that your project only does one thing, so a tightly-scoped statement can feel reductive. Though the ultimate feature set is likely to be wider than what’s contained in a single sentence, your project purpose statement needs to focus on the one thing your product must do in order to be what it claims to be. You may have to leave out parts that feel important, but that’s okay. What’s crucial is that the statement defines the soul of the product in a snappy, easy-to-remember way, so it can act as an internal compass that keeps the team rowing in the same direction.
Save the big ideas for another exercise. Your project purpose statement should be concrete and tangible. A good litmus test for concreteness is whether or not the desired outcome can be measured quantifiably. If you don’t have hard data to work with at the very beginning, consider making an educated guess. The number doesn’t matter as much as the clear line between the A, B, and C elements of your project purpose statement.
A project purpose statement needs to outline the value the project seeks to deliver. It’s hard to state in so few words the exact and entire value of a project, but it shouldn’t be impossible.
If you can’t state the specific value you’re trying to deliver in a plain and understandable way, you might need to take a step back and reconsider whether you’re actually ready to start building.
It sounds like the most obvious thing in the world: Don’t start a complex, expensive endeavor until everyone involved knows what they are building, who it’s for, and what it seeks to achieve. Amazingly, though, the vast majority of software projects don’t start with a clear, shared sense of purpose. Even fewer have a concise statement of purpose that every team member can rattle off verbatim.
This statement isn’t just a good exercise to get your team on the same page. It’s beneficial for the project outcomes, too.
It mitigates scope creep by making it easy to understand which features are necessary, and which you can talk yourself (or your coworker) out of.
It sets clear client expectations. A project purpose statement gently forces your client to reveal important details, such as how the product should help the end-user, and what an ideal outcome looks like.
In our experience, it’s the difference between a successful project and one that spirals out of control. A clear, articulate statement of purpose will help you be the team leader who forges ahead with a clear sense of direction, avoiding detours and setting your development team off on a path to success.
Finding the right agency isn’t as cut-and-dried as finding someone available that meets your budget. Those are two extremely important factors, but there are many more to consider.
In our guide on how to search for and find the right agency for your project, you’ll learn:
Hiring an agency can be daunting. Download our eBook now to simplify your decision-making process.
Polymorphic relationships are such a pattern—a powerful tool that can help us avoid complicated code paths when we’re dealing with similar related items.
Wikipedia defines polymorphism as the provision of a single interface to entities of different types.
Luckily, Laravel offers support for polymorphic database structures and model relationships. In this post, we’ll pick up where the documentation leaves off by presenting several patterns that provide us the opportunity to eliminate conditionals.
Let’s start with the one-to-one example outlined in the documentation. For a quick refresher: a blog post has one image, a user has one image, and each image belongs to exactly one parent (either a blog post or a user).
After generating the database and model structure, we have the following tables: posts, users, and images.
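If you’re building this from scratch, the images table migration might look something like the sketch below; the url column is illustrative, but morphs() is the standard Laravel helper that creates the polymorphic columns:

Schema::create('images', function (Blueprint $table) {
    $table->id();
    $table->string('url');

    // Creates the imageable_id and imageable_type columns
    $table->morphs('imageable');

    $table->timestamps();
});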
We also need to grab the Image model from the docs:
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\MorphTo;

class Image extends Model
{
    public function imageable(): MorphTo
    {
        return $this->morphTo();
    }
}
Let’s say we have a page displaying all of the images on our site; each image will have a caption below it. For images attached to a user, we’ll display this caption: {$user->name}’s email was verified on {$user->email_verified_at}. For images attached to a post, we’ll display this one: {$post->name} was posted on {$post->created_at}.
Once we have a collection of $images, we can loop through them and call the imageable relationship to get the thing the image is attached to. Now we have a decision to make. We know our imageable is either a User or a Post, but since there are two different ways to display the caption, we might be inclined to check the type in our view.
@foreach ($images as $image)
    @if ($image->imageable instanceof App\Models\User)
        {{ $image->imageable->name }}’s email was verified on {{ $image->imageable->email_verified_at->format('n/j/Y') }}
    @elseif ($image->imageable instanceof App\Models\Post)
        {{ $image->imageable->name }} was posted on {{ $image->imageable->created_at->format('n/j/Y') }}
    @endif
@endforeach
Unfortunately, we have just leaked our abstraction by exposing the model types to the view. Furthermore, whenever another model becomes imageable, we’ll need to add another condition to this if block.
We can hide these model-specific implementation details by extracting caption methods to the User and Post models.
class User extends Authenticatable
{
    public function caption(): string
    {
        return str('{name}\'s email was verified on {date}')
            ->replace('{name}', $this->name)
            ->replace('{date}', $this->email_verified_at->format('n/j/Y'));
    }
}
class Post extends Model
{
    public function caption(): string
    {
        return str('{name} was posted on {date}')
            ->replace('{name}', $this->name)
            ->replace('{date}', $this->created_at->format('n/j/Y'));
    }
}
Now we can clean up our view:
 @foreach ($images as $image)
-    @if ($image->imageable instanceof App\Models\User)
-        {{ $image->imageable->name }}’s email was verified on {{ $image->imageable->email_verified_at->format('n/j/Y') }}
-    @elseif ($image->imageable instanceof App\Models\Post)
-        {{ $image->imageable->name }} was posted on {{ $image->imageable->created_at->format('n/j/Y') }}
-    @endif
+    {{ $image->imageable->caption() }}
 @endforeach
Now that we’ve plugged the leak in our abstraction, there are a few patterns that I’d like to introduce to help keep things encapsulated as we introduce additional types.
First, our caption method has helped us see something that these models have in common. Posts and Users can have images attached to them—as the imageable relationship implies, they are imageable. Let’s make this official with an Imageable contract.
<?php

namespace App\Contracts;

use Illuminate\Database\Eloquent\Relations\MorphOne;

interface Imageable
{
    public function image(): MorphOne;

    public function caption(): string;
}
-class User extends Authenticatable
+class User extends Authenticatable implements Imageable
 {
     // ...
 }
-class Post extends Model
+class Post extends Model implements Imageable
 {
     // ...
 }
As new polymorphic types implement the Imageable contract, we’ll be required to implement any missing caption methods.
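The contract also calls for an image() relationship. If a model doesn’t define one yet, it’s the standard morphOne relationship from the Laravel docs, something like this on either model:

public function image(): MorphOne
{
    // One polymorphic image, keyed by the imageable_id and imageable_type columns
    return $this->morphOne(Image::class, 'imageable');
}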
In addition, we now have a type hint we can add to methods that depend on an Imageable object. Let’s say our app has a Team model that can feature Imageable items to display on a team page. The method might look something like this:
class Team extends Model
{
    public function feature(Imageable $imageable)
    {
        $this->features()->save($imageable);
    }
}
The feature method doesn’t need to know or care what type of object $imageable is as long as it implements the Imageable contract.
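In practice, that means calling code can hand feature() any model that honors the contract. A quick sketch (assuming the features() relationship exists on Team, as the method above implies):

$team = Team::first();

// Both calls satisfy the Imageable type hint; feature() never checks the class
$team->feature(Post::first());
$team->feature(User::first());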
Finally, using $image->imageable->caption() could be improved. Treating the image as having the caption, which can be derived from its Imageable object, would be a more readable alternative.
 <?php

 namespace App\Models;

 use Illuminate\Database\Eloquent\Model;
 use Illuminate\Database\Eloquent\Relations\MorphTo;

 class Image extends Model
 {
     public function imageable(): MorphTo
     {
         return $this->morphTo();
     }
+
+    public function caption(): string
+    {
+        return $this->imageable->caption();
+    }
 }
Now, our view looks a bit more readable:
 @foreach ($images as $image)
-    {{ $image->imageable->caption() }}
+    {{ $image->caption() }}
 @endforeach
Now, let’s move on to many-to-many relationships. Again, we’ll start with the examples in Laravel’s documentation; in this example, both posts and videos can be associated with tags, and tags can be associated with many posts and/or videos.
Per the documentation, we’ll add tables for videos, tags, and taggables.
<?php

namespace App\Models;

use App\Models\Post;
use App\Models\Video;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\MorphToMany;

class Tag extends Model
{
    use HasFactory;

    public function posts(): MorphToMany
    {
        return $this->morphedByMany(Post::class, 'taggable');
    }

    public function videos(): MorphToMany
    {
        return $this->morphedByMany(Video::class, 'taggable');
    }
}
The Image model from our one-to-one example has an imageable relationship to get the imaged thing, but the Tag model currently provides no way to get the tagged things as a single collection.
We could add a taggables method to merge the posts and videos collections:
 <?php

 namespace App\Models;

 use App\Models\Post;
 use App\Models\Video;
 use Illuminate\Database\Eloquent\Factories\HasFactory;
 use Illuminate\Database\Eloquent\Model;
 use Illuminate\Database\Eloquent\Relations\MorphToMany;

 class Tag extends Model
 {
     use HasFactory;

     public function posts(): MorphToMany
     {
         return $this->morphedByMany(Post::class, 'taggable');
     }

     public function videos(): MorphToMany
     {
         return $this->morphedByMany(Video::class, 'taggable');
     }
+
+    public function taggables()
+    {
+        return $this->posts->append($this->videos->toArray());
+    }
 }
However, there are two problems with this approach.
Unlike imageable, taggables doesn’t return a relationship. You can’t eager load it, chain query methods off of it, or call it as a property such as $tag->taggables.
At this point we might be tempted to make a Taggable model for the taggables table so we can relate to it from Tag.
 public function taggables(): HasMany
 {
-    return $this->posts->append($this->videos->toArray());
+    return $this->hasMany(Taggable::class);
 }
The problem with this approach is that taggables doesn’t actually return the tagged things. It returns a mapping to the tagged things via the taggable_id and taggable_type columns, but not the things themselves.
We really want to replicate the pattern introduced in the Imageable model by having the taggables relationship return the things that implement that contract. This results in returning a mixed collection of posts and videos.
Note: It might seem strange to have a collection of more than one model type, but remember that we’re keeping this detail encapsulated. Calling code should only be aware that it is a collection of taggables.
So how in the world do we do this?
Jonas Staudenmeir wrote a fantastic laravel-merged-relations package which adds support for representing related data from multiple tables as a single SQL View. After installing the package, we need to make and run the following migration:
use App\Models\Tag;
use Illuminate\Database\Migrations\Migration;
use Staudenmeir\LaravelMergedRelations\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::createMergeView(
            'all_taggables',
            [(new Tag)->posts(), (new Tag)->videos()]
        );
    }

    public function down(): void
    {
        Schema::dropView('all_taggables');
    }
};
Now, we can import the HasMergedRelationships trait and update our taggables relationship.
 <?php

 namespace App\Models;

 use App\Models\Post;
 use App\Models\Video;
 use Illuminate\Database\Eloquent\Factories\HasFactory;
 use Illuminate\Database\Eloquent\Model;
 use Illuminate\Database\Eloquent\Relations\MorphToMany;
+use Staudenmeir\LaravelMergedRelations\Eloquent\HasMergedRelationships;

 class Tag extends Model
 {
     use HasFactory;
+    use HasMergedRelationships;

     public function posts(): MorphToMany
     {
         return $this->morphedByMany(Post::class, 'taggable');
     }

     public function videos(): MorphToMany
     {
         return $this->morphedByMany(Video::class, 'taggable');
     }

     public function taggables()
     {
-        return $this->posts->append($this->videos->toArray());
+        return $this->mergedRelation('all_taggables');
     }
 }
We can test this relationship with the following simple test:
/** @test */
public function fetching_tagged_items()
{
    $tag = Tag::factory()->create();
    $post = Post::factory()->tagged($tag)->create();
    $video = Video::factory()->tagged($tag)->create();

    $taggables = $tag->taggables()->get();

    $this->assertTrue($taggables->contains($post));
    $this->assertTrue($taggables->contains($video));
}
Note: In the above example, both the PostFactory and VideoFactory classes contain the following helpful method:
public function tagged(Tag $tag)
{
    return $this->afterCreating(fn ($model) => $model->tags()->attach($tag));
}
So far, we have covered working with a single type stored in different tables. Next, we’ll consider how to store multiple types in a single table. This pattern is called Single Table Inheritance, and the Parental package was created to implement it in Laravel.
To keep things simple and build from our previous examples, let’s say we need to distinguish between guest posts and sponsored posts. We’ll add the following migration to store the guest and sponsor data.
Note: These would probably be foreign keys of some type; however, we’ll use strings for this example.
public function up(): void
{
    Schema::table('posts', function (Blueprint $table) {
        // Defines the type of each post, "guest" or "sponsored"
        $table->string('type')->after('id');

        // Could theoretically store guest and sponsor data
        $table->string('guest')->nullable();
        $table->string('sponsor')->nullable();
    });
}
Now we can make GuestPost and SponsoredPost models to cover the two types and update the Post model to define its child types with Parental.
<?php

namespace App\Models\Posts;

use App\Models\Post;
use Parental\HasParent;

class GuestPost extends Post
{
    use HasParent;
}
<?php

namespace App\Models\Posts;

use App\Models\Post;
use Parental\HasParent;

class SponsoredPost extends Post
{
    use HasParent;
}
 <?php

 namespace App\Models;

 use App\Contracts\Imageable;
 use App\Contracts\Taggable;
 use App\Models\Image;
+use App\Models\Posts\GuestPost;
+use App\Models\Posts\SponsoredPost;
 use App\Models\Tag;
 use Illuminate\Database\Eloquent\Factories\HasFactory;
 use Illuminate\Database\Eloquent\Model;
 use Illuminate\Database\Eloquent\Relations\MorphOne;
 use Illuminate\Database\Eloquent\Relations\MorphToMany;
+use Parental\HasChildren;

 class Post extends Model implements Imageable, Taggable
 {
     use HasFactory;
+    use HasChildren;
+
+    protected $guarded = [];
+
+    protected $childTypes = [
+        'guest' => GuestPost::class,
+        'sponsored' => SponsoredPost::class,
+    ];

     // ...
 }
The above will result in another mixed collection when calling Post::all().
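With Parental’s HasChildren trait in place, each row is hydrated as its child class based on the type column. A quick sketch of what that looks like:

Post::all()->map(fn ($post) => $post::class);

// => Illuminate\Support\Collection of class names such as:
//    App\Models\Posts\GuestPost, App\Models\Posts\SponsoredPost, ...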
Note: Polymorphism is all about encapsulating what differs by abstracting what is the same. In our one-to-one and many-to-many examples, we defined our models’ sameness first by extracting an interface and next by merging our relationships. Here, however, all of our models are posts. Where they differ is in what type of post they are.
Now that we have our mixed collection, let’s say we want to have a line under each post title crediting the source of the post. The guest posts would read This post was guest written by {guest name} while the sponsored posts read This post is sponsored by {sponsor name}. This is as simple as defining a credits method on both models.
 <?php

 namespace App\Models\Posts;

 use App\Models\Post;
 use Parental\HasParent;

 class GuestPost extends Post
 {
     use HasParent;
+
+    public function credits()
+    {
+        return "This post was guest written by {$this->guest}";
+    }
 }
 <?php

 namespace App\Models\Posts;

 use App\Models\Post;
 use Parental\HasParent;

 class SponsoredPost extends Post
 {
     use HasParent;
+
+    public function credits()
+    {
+        return "This post is sponsored by {$this->sponsor}";
+    }
 }
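Wherever we render the collection, each post now resolves its own credits() implementation, and no type checks are needed. A minimal usage sketch:

foreach (Post::all() as $post) {
    // GuestPost and SponsoredPost each supply their own credits() string
    echo $post->credits();
}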
Note: If we want to enforce the credits method, we could implement a Post contract, similar to how we did with the Imageable and Taggable contracts.
Whatever type-checking conditionals you find yourself working with, these patterns can be applied to treat varying types as if they were the same. I hope this post inspires you to replace a few conditionals with polymorphism.
The client is happy, and the team can finally relax, wipe the sweat from their brows, and say, “Wow, that’s a wrap!” And then it sets in: you start to wonder what’s next. But before you turn to whatever comes next, after the congratulatory high fives, take some time to reflect on your completed project and discuss the key takeaways that can be turned into learnings for a future project.
Want to know how we do it? Here are ten tips that can help you facilitate your next project retrospective:
While the number of people on a project can fluctuate depending on the deliverables in any given week, it's important to bring every contributor together when it comes to the end of a project. Doing so gives everyone involved an equal opportunity to share their experiences as a team, and no one's contribution is left out.
Don't wait. One project ending means another is ready to start, so it's best to hold your retrospective soon after the project has finished. Doing so helps ensure everything is fresh and top of mind for everyone.
The goal of a retrospective is to identify what did and didn’t work throughout a project and to capture key takeaways that can be used for future projects.
Once the finish line is crossed, let your team know that the retrospective is all about commemorating and resurfacing key takeaways for a focused discussion. It's an opportunity to intentionally reflect on a project one last time with a focus on actionable outcomes.
As with any meeting or workshop, sending an agenda a few days in advance goes a long way to set everyone up for success. Encourage your colleagues to think about what they've learned during the project, identify areas where processes could be improved, and be prepared to voice those ideas during the group discussion.
One helpful way to get the conversation going is for the facilitator to come prepared with a handful (3-5) of conversation starters to help focus the discussion.
Tighten uses Trello for everyday project organization, and we are 100% remote, so when I am the facilitator, I share my screen over Zoom and spin up a new Trello board, Kanban style, with one conversation starter in each column, and then document each person’s response. This gives us an on-the-spot visual while we chat about similar patterns and group consensus.
Here are some of the conversation starters that we use:
Think about timeboxing each conversation topic so the discussion stays focused and leaves time for multiple topics.
And remember, it's okay for the discussion to veer away from your conversation starters. They’re not set in stone and meaningful discussions change course organically. The most important thing is to make it easy for everyone to participate without feeling overwhelmed or put on the spot.
This should be a comfortable and safe space for everyone. Everyone should be prepared in advance to participate and share ideas. This is a discussion of equal parts celebration and lessons learned. Go around the room, backwards and forwards, and balance the time equally so it’s a group discussion, not a monologue from a single person.
Nobody enjoys being ushered into another project without pausing to acknowledge the hard work on the project that just wrapped up; rushing on does a disservice to everyone’s effort. You couldn’t have done it without each other, so remember to celebrate those valuable moments. My colleague, Tammy, always brings a balloon filled with confetti that we can pop at the end of the meeting as a way to celebrate the completed project. Huzzah!
As a facilitator, one of your jobs is to aggregate the recurring themes and summarize the actionable learnings into a brief summary that can be shared with both your project team and teammates that weren’t on the project. As new projects move in quickly, documenting this for all projects leaves a helpful paper trail everyone can refer back to—and depend on—when setting up new project structures and processes.
Outside of the project setting and internal team retrospective, it's good practice to solicit additional client feedback and evaluation after a project ends and celebrate the journey that you shared together. Having a loop of feedback and collaboration always helps to build an updated project strategy for the future. It ultimately makes the next project run even more smoothly and builds on your working relationship with your client.
Help your colleagues learn from one another. Of course, not everyone can be on a single project, but project learnings, whether technical, process-related, or something else entirely, could apply to colleagues’ own projects, whether or not you’re also on them. Sharing your key takeaways with everyone is beneficial, as learnings don’t start and stop with any individual team.
It’s a wrap! Great work! Take a walk outside before you move onward to the new project.
We hope that some of these ideas help you lead a meaningful project retrospective. If you have other ideas, we’d love to hear about them!
It’s a well-researched fact that throughout a typical forty-hour work week, most workers are not able to accomplish forty hours of actual work. In The Mythical Man-Month, now an industry standard, Fred Brooks shows how simply adding more developers to a delayed project delays it even further.
So, if speeding up your project isn’t as simple as throwing more hours or developers onto the project, what can you do to add capacity?
There’s an upper limit to what an individual can accomplish in a given work week, and even your best, most productive developers will hit that limit before forty hours.
As a result, our entire industry is out of sync when it comes to the trade of dollars for meaningful time spent on development work. Billing any programmer at forty hours a week means you’re paying for far less than what you think, and everyone just shrugs that off as a cost of doing business.
To get back in sync, more hours or more developers isn’t the answer. Many in the industry are moving to a four-day work week, or thirty-two billable hours. While that may solve the issue of wasted time, it doesn’t do much to level up a team’s capabilities.
At Tighten, we have a way of working fewer hours for each client (and billing less!) while making each of those hours more valuable. It’s called 20% time.
20% time is how we structure our work weeks here at Tighten. Our developers spend four days working on client projects. On the fifth day, what we call 20% time, developers are not booked on client work.
Instead, on these days, the team is free to focus on their own growth and contributions to the broader programming community. This is valuable not only to the team itself, but also to the projects we work on.
We’ve written extensively on the benefits 20% time offers to employees: how it can decrease burnout and prevent turnover, how it makes them happier and more content, and how it is a powerful tool for career growth. We care about these things for our team members, but those benefits are also passed on to our clients in three specific ways:
When our developers intentionally set aside eight hours a week to work on their own learning and professional projects, it has a powerful impact on their abilities on client projects, as coders, leaders, and thinkers.
The hours our team spends writing, researching, and building and playing with new tools result in finely-tuned skills and deep banks of knowledge. Our team stays well informed about broad movements and ideas in the industry, learning from and interacting with leading thinkers in their learning and open source development time. As they build tools, write blog posts, live stream, and deliver talks, their ability to reason and communicate grows and flourishes. This serves their career trajectory well, and client projects reap the benefits, too; few programmers out there have a broader set of experiences and knowledge.
Our developers often use 20% time to learn about tools and methodologies they haven’t had a chance to experience in their day-to-day programming.
For example, we use Filament and Livewire on many client projects today, but the first knowledge our team developed of these tools was a team member spending 20% time working with them on open source projects. That initial 20% time foray into those tools has led to an entire team confident in working with those technologies; even if they’re not a Filament expert, they can reach out to the Filament expert in our Slack and have an answer in minutes.
This growth in 20% time leads to less time-wasting experimentation on client projects. No client ever has to be the recipient of a disappointing message like “we tried GraphQL on your project and then learned at the end it was a terrible tool for this use case”; we figured that out long before we set out to determine that client’s needs.
It might seem counterintuitive that a 32-hour work week with one day of 20% time is more productive than a typical 40-hour work week. But it lines up with what we, and many others, have seen: no one has forty hours of fully dedicated heads-down work in them. It’s just not how people are built.
Reducing the billable week by 20% doesn’t decrease the amount of work getting done; it just concentrates it into four days. The fifth day of the week is certainly full of intentional learning, but it’s also a space for our team to work on passion projects, lead initiatives, and grow; it gives them time and space to decompress, organize, and recalibrate. This results in a team with less burnout and lower turnover. Because of this, four days of client work is more manageable and productive because the team on your project is refreshed and ready to hit the ground running come Monday.
If a professionally-developed, highly skilled, and more productive team isn’t enough to tell you that 20% is a value-add, not a detriment, we probably won’t convince you with one more paragraph. But hear us out:
The impact on your workflows and deadlines is net positive. Just because our 32-hour work week has eight fewer hours than the other guy’s doesn’t mean you’re getting less or lower-quality work.
In fact, it’s highly likely you are paying for the same amount of work getting done. Our philosophy builds in one day a week for growth. The idea is that if you spend 20% of your time growing and learning and resetting, the remaining 80% is more productive.
It sounds idealistic, but in our experiment, we’ve seen the results that confirm our hypothesis. We’ve been doing it since 2016. If it wasn’t working, we would have scrapped it. The quality of the work matters more than the quantity of hours worked. Developers with the privilege of 20% time are able to stay on top of their workload, on top of current industry trends — and most importantly: on top of your project.
At a mission-driven organization, folks from many different departments may be very connected to the outcome of your web app project. This means more people in meetings, more opinions on features and functionality … you get the idea.
This abundance of voices in the room can lead to a lot of false starts. To get your team on the same page from the jump, here are three things to think about when you’ve identified that you want to build a web application.
One particularly tricky aspect of building a web app as a nonprofit is the hoops you may have to jump through just to get your project started. Many nonprofits use a request-for-proposal (RFP) process to select a vendor, which is often an ineffective way to choose a right-fit development partner.
Asking an outside agency to submit an elaborate proposal without understanding your organization’s internal culture and processes doesn’t work very well. You need to start a conversation, not just amass a bunch of proposals from companies you barely know.
Though the RFP process may be less than optimal, it does force you to carefully articulate your project’s goals and requirements, which is something you should do anyway. Make sure your team is aligned on the project’s purpose before you start talking to agencies.
Gathering all the different ideas about features is the fun part. But before you let your head float into the clouds, be sure to go through the following basic steps to ensure your project starts off on the right foot.
Living in the nonprofit world, you already know funding has … complexities. Sometimes money comes in a one-time lump sum, other times it’s spread out as cyclical funding — and sometimes grants have a firm cap or other strings attached.
Even if you have ample funds at your disposal right now, building a successful web app is not a “one-and-done” process. There are maintenance and upkeep costs to consider, as well as the potential for additional feature development down the road. If you burn every penny on the initial development, you might end up with a buggy or incomplete app in the wild with no immediate way to fix it. And, perhaps more importantly, you’ll miss the important opportunity to iterate on the initial idea as feedback comes in.
Talk to the folks with the purse strings about the funding you’ll need beyond the initial phase. Explain to them that software is an investment that needs to be maintained and supported over time. If possible, have them commit to a cyclical budget to support your new application.
If the budget for your project is modest and truly finite, you might need to consider dialing your scope back, or even postponing the project. You’re better off scrapping the project than trying to do it on the cheap.
So as you prepare your budget, consider future expenses as well as immediate ones. Realistic, holistic budgeting is the first step to building an app that will be a success for your organization.
After establishing your budget, the next step is to choose what tech stack you’ll build in.
Development firms tend to sort themselves by tech ecosystem, so choosing a particular stack has the side effect of narrowing your search right off the bat. This is a good thing. On the flip side, if you don’t choose a stack before you start the search process, you’ll be choosing from literally every firm in the world. That sounds stressful, doesn’t it?
You can start to narrow down your tech stack options with these factors:
The capabilities of your internal team. What your team already knows can and should influence what tech stack you choose, as well as how you build and maintain your application.
Even if you don’t have a full-blown development team, you might have people with some programming ability or understanding of the software development landscape. Ask around and see if anyone in the organization has a strong tech stack preference, and if so, explore that option. If you have an IT group, find out if anyone there has knowledge in a particular tech ecosystem. Any of the popular tech stack options are likely to work fine for your app, as long as you pick a good agency. In the end, your organization has to own the app, so allowing existing internal preferences to sway the decision makes sense.
If there’s no internal opinion, choose Laravel. If you end up having to hire staff to maintain your app, standing up a development team is more manageable in Laravel than in any other stack. Plus, Laravel is just plain awesome (yes, we’re biased).
Your ability (and funding) to maintain your app. After building the app, your team will be responsible for maintaining it, which means you may find yourself needing to hire one or more engineers. Though elite developers in every stack are hard to attract and retain, the ubiquity of PHP and the massive popularity of WordPress have created a “minor league” for Laravel development. This means there’s a bigger pool of talent to hire from. PHP developers are also somewhat less expensive to employ than developers in the other popular backend ecosystems, further bolstering the case for Laravel.
For reference, here are the average salaries of mid-level developers across popular tech stacks:
If hiring is out of the picture, there are other options for bolstering your team with dev experts that can lead and help maintain your web app, like our embedded development teams.
Last but not least, you should spend some time figuring out which features of your product are a “must have” and which you could potentially live without. Relegate anything you can live without to the “nice to have” pile. The more features you push for, the longer development will take, and the more it will cost.
For each feature you can’t live without, take some time to consider how you might be able to scale back its fidelity rather than getting rid of it entirely. That way, when things end up taking longer than you’d hoped, you’ve got a Plan B ready to go.
For example: your app requires a calendar. It’s a must-have feature. But does it have to be an interactive calendar with automated notifications and an interface that allows external users to book meetings? Maybe not. Maybe just a visual calendar that lists events over time is perfectly functional for your users.
When it’s time to start planning your web app, it’s easy to get overwhelmed. The search process alone can seem like the starting line of a marathon when you haven’t been on a run for months.
But considering each important factor will give your team direction. From there, you’ll have the ability to start searching for a partner to help you build an app that meets your needs, serves your users, and extends your mission.
Now, imagine something even better: what if this programmer were instead a tool in your project that could do the same thing? It could conform your application code to meet your standards, upgrade it to support newer versions of PHP, and even identify and fix other potential issues you may not be aware of. Sounds pretty magical, right?
Well, that tool exists, and it's called Rector. Rector is a powerful tool that helps developers maintain consistency and improve the quality of their codebase. It does this by automatically refactoring code based on predefined rules. Developers can ensure that their code adheres to a specific set of coding standards and conventions, which makes it easier to understand, maintain, and evolve over time.
Rector can also help upgrade your codebase to a new framework, language, or library version. It can handle the most time-consuming and error-prone tasks and simplify the upgrade process.
Overall, Rector is a valuable tool for any developer looking to improve the quality and consistency of their codebase. With its ability to analyze code statically and make changes without breaking anything, Rector can help developers save time and effort while improving the overall quality of their code.
Let's dive in.
You can add Rector to your project using Composer:
composer require rector/rector --dev
There's also a community-created Rector extension for Laravel, which I maintain:
composer require driftingly/rector-laravel --dev
Once you've installed Rector, you'll want to create a configuration file:
vendor/bin/rector init
The init command creates a file called rector.php in your project’s root directory. This file is where we’ll add and configure the rules that we want Rector to follow. A rule is a PHP class used to find and transform code.
Let's start with a simple single-rule configuration for a Laravel application. Laravel 9 introduced a new to_route
helper. We can tell Rector we want to use to_route
instead of redirect()->route()
or Redirect::route()
using RedirectRouteToToRouteHelperRector
.
Replace the contents of your rector.php file with:
<?php

declare(strict_types=1);

use Rector\Config\RectorConfig;
use RectorLaravel\Rector\MethodCall\RedirectRouteToToRouteHelperRector;

return static function (RectorConfig $rectorConfig): void {
    $rectorConfig->paths([
        __DIR__ . '/app',
        __DIR__ . '/config',
        __DIR__ . '/database',
        __DIR__ . '/public',
        __DIR__ . '/resources',
        __DIR__ . '/routes',
        __DIR__ . '/tests',
    ]);

    $rectorConfig->rule(RedirectRouteToToRouteHelperRector::class);
};
Now that we've told Rector what to look for, we can run it on our codebase:
# Preview changes
vendor/bin/rector --dry-run

# Run
vendor/bin/rector
If your codebase uses either redirect()->route() or Redirect::route(), you will see changes like this:
-return redirect()->route('home')->with('error', 'Incorrect Details.')
+return to_route('home')->with('error', 'Incorrect Details.')

-return Redirect::route('home')->with('error', 'Incorrect Details.')
+return to_route('home')->with('error', 'Incorrect Details.')
Let's add some more rules to our rector.php
config file to see what else Rector can do.
Sets bundle rules together and can be added using $rectorConfig->sets(). If we wanted to upgrade our code to be compatible with PHP 8.1, we would use the following:
. If we wanted to upgrade our code to be compatible with PHP 8.1 we would use the following:
$rectorConfig->sets([
    LevelSetList::UP_TO_PHP_81,
]);
Two other common sets are SetList::DEAD_CODE (which removes code that doesn’t have any effect) and SetList::CODE_QUALITY (which fixes some common code quality issues). After adding these, our rector.php file now looks like this:
<?php

declare(strict_types=1);

use Rector\Config\RectorConfig;
use RectorLaravel\Rector\MethodCall\RedirectRouteToToRouteHelperRector;
use Rector\Set\ValueObject\LevelSetList;
use Rector\Set\ValueObject\SetList;

return static function (RectorConfig $rectorConfig): void {
    $rectorConfig->paths([
        __DIR__ . '/app',
        __DIR__ . '/config',
        __DIR__ . '/database',
        __DIR__ . '/public',
        __DIR__ . '/resources',
        __DIR__ . '/routes',
        __DIR__ . '/tests',
    ]);

    $rectorConfig->sets([
        SetList::DEAD_CODE,
        SetList::CODE_QUALITY,
        LevelSetList::UP_TO_PHP_81,
    ]);

    $rectorConfig->rule(RedirectRouteToToRouteHelperRector::class);
};
By adding only a few lines to our rector.php file, we’ve improved code consistency an incredible amount. These three sets include over 150 individual rules. If you don’t like a particular rule or need a rule skipped for a specific file or directory, you can update your config with a call to $rectorConfig->skip():
$rectorConfig->skip([
    RecastingRemovalRector::class,
]);
To view all the rules and sets, you can check out the Rector Documentation. Rector even includes a bunch of Laravel-specific rules.
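For example, the rector-laravel package we installed earlier ships with set lists of its own; assuming a Laravel 10 target, pulling in an entire framework upgrade set might look like this:

use RectorLaravel\Set\LaravelSetList;

$rectorConfig->sets([
    // Applies the rules needed to move a codebase toward Laravel 10
    LaravelSetList::LARAVEL_100,
]);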
So far, we've run Rector manually from the command line, but we can set it and forget it by adding it to our CI pipeline. My preferred way is through GitHub Actions.
Create a file in .github/workflows called rector.yaml with the following:
# Inspiration https://github.com/symplify/symplify/blob/main/.github/workflows/rector.yaml
name: Rector

on:
  pull_request: null

jobs:
  rector:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          token: "${{ secrets.ACCESS_TOKEN || secrets.GITHUB_TOKEN }}"
      - uses: shivammathur/setup-php@v2
        with:
          php-version: 8.1
      - uses: "ramsey/composer-install@v2"
      - run: vendor/bin/rector --ansi
      - uses: EndBug/add-and-commit@v5.1.0
        with:
          add: .
          message: "[ci-review] Rector Rectify"
          author_name: "GitHub Action"
          author_email: "action@github.com"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
When a pull request is created or updated, this action will run and commit any changes.
This automation is especially useful when a new developer joins the project. Their pull requests will be updated automatically before the first code review.
Another common way to automate these tasks is with a Husky pre-commit hook.
Hopefully, I've demonstrated how Rector can help maintain consistency in your projects.
Here are a few other ways you might use Rector.
For more information check out:
But as our opinions about how to write code—especially in Laravel—have grown, it’s been harder and harder to find the right tooling to enforce all of our code styles. Finally, we decided it was time to create our own tool for it. We’ve taken the best of all the other tools out there, thrown in a little of our own Laravel-specific magic, and it’s finally ready for release: introducing Duster.
Duster is an opinionated linter and fixer for Laravel code. It’s not a linting or fixing tool of its own, but rather a parent tool that takes the best of Laravel’s Pint, together with the power of PHP_CodeSniffer and PHP-CS-Fixer configured the Tighten way, sprinkles in a little bit of Tighten’s special Laravel-specific lints through Tlint, and packages it all up to create one easy-to-use, powerful, Tighten-flavored code style tool.
Because Duster is a linter and a fixer at the same time, you can use it to tell you when your code is out of sync with Duster's styles, or you can also use it to fix those issues. Duster works on the command line, but you can also integrate it with Husky to run it automatically in response to local Git triggers, or use our premade GitHub Action to run it in your CI pipeline.
Duster installs and configures four tools for linting and fixing: Laravel Pint, PHP_CodeSniffer, PHP-CS-Fixer, and Tighten’s Tlint. Duster configures each tool using Tighten’s presets. We’re using Pint as the basis of the styles, since that’s the Laravel-preferred tool, and then using the other three tools to layer on more lints/fixes to cover even more ground.
You can install the package via Composer:
composer require tightenco/duster --dev
Duster can be run in one of two modes: lint or fix. Linting will report issues but won’t make any changes to your codebase, and fixing will report and fix issues found.
To lint everything:
./vendor/bin/duster lint
To fix everything:
./vendor/bin/duster fix
To run Duster only on files that have uncommitted changes according to Git, you can use the --dirty option:
./vendor/bin/duster lint --dirty
# or
./vendor/bin/duster fix --dirty
Duster comes with Tighten’s preferred styles defined out of the box, but it’s also made to be configurable with the duster.json file.
You can add and remove tools, change the order in which tools run, and use local configurations for each tool to override Duster’s defaults. You can also customize the paths you’re running Duster on and even add other scripts—for example, you can add in a tool like PHPStan—as a part of your Duster stack.
Check out the customizations section of Duster’s documentation for examples.
If you want to run Duster as a GitHub Action, you can publish a GitHub Actions workflow:
./vendor/bin/duster github-actions
You have the option to fail the workflow if any issues are found, or auto-commit the fixes to the codebase, depending on what you prefer.
You can use Husky to automatically run Duster on your local changed files before every commit.
To install Husky and dependencies:
npx husky-init && npm install lint-staged --save-dev
Update Husky’s pre-commit:
npx husky add ./.husky/pre-commit 'npx --no-install lint-staged'
Open the new pre-commit file and remove the npm test line.
Then configure lint-staged by updating your package.json.
Here we tell lint-staged to run duster for all *.php files:
{ ... "lint-staged": { "**/*.php*": [ "vendor/bin/duster lint" ] } ...}
You can also use Husky to lint/fix other file types using tools such as stylelint or prettier.
If you want a comprehensive code style linter and/or fixer for your Laravel applications, Duster is a robust, configurable tool that brings together the best existing tools out there. We hope you love it.
In an ideal world, your team would work at a sustainable pace, methodically producing and deploying thorough, maintainable, tested code. They would calmly hit every milestone while keeping your software ecosystem completely free of technical debt.
But we don’t live in that world.
Even if you’ve been finding time in between the big tasks for minor refactors, dependency updates, and security patches, the encroaching reality of technical debt has a way of eventually catching up with you. When the pile of debt finally gets too large to ignore, you know it’s time to stop, take stock, and pay down the debt before your development process grinds to a halt.
But how do you get your boss (or your boss’s boss, the board of directors, or an investor group) to approve the necessary resources?
Paying down technical debt requires slowing down or pausing your team’s regular work stream. When your biggest customer is asking for new features, it can be hard to tell them they’ll have to wait. Slowing down is good for code quality, but bad for the timelines and budgets that make leadership look good.
When you prioritize speed over quality, ultimately, you pay in the form of technical debt. And left long enough, the fallout from technical debt will cause more pain than the proactive rework will.
Leaders on the business side can be uncompromising in their mission to go faster no matter the cost. You’ll likely hear a variety of arguments from your boss about why there’s no way to pay down technical debt:
There will always be a reason to postpone. Leadership wants speed, but they also don’t want things breaking. And left unchecked, technical debt will, without fail, cause your codebase to break at some point. Luckily, there are tactful arguments that will convince your boss to see it from your perspective—no matter what type of leadership style they have.
As a software developer, you’ve got a keen eye for details and you’re observant. So you likely have a good read on your boss. You might not know everything about them, but we bet you know enough to craft the most compelling argument to fund technical debt paydown.
Depending on their leadership style, work habits, and general level of fear and anxiety, there’s an argument for paying down technical debt that will resonate.
Is your boss level-headed, logical, and generally easy to work with as long as you have a good reason for choices? Then you should employ the low-hanging fruit argument.
Find a tangential task — like a minor redesign of a feature — that touches the code you need to fix. Frame your argument like it’s an inconvenience not to do it. “While I’m tinkering around in there, I might as well take a bit of extra time just to fix it up. It’s irresponsible not to!”
This argument appeals to your boss because it’s framed as a convenient effort with no real downsides.
Is your boss chipper even before they’ve had caffeine? If you’d characterize them as a glass-half-full optimist, then persuade them with a vision of a spotless codebase in the future.
The carrot argument is for those with a rosy outlook — “If we take the time to get the house in order now, it will be in good shape for a long time. Doing the rework now avoids risks, and might even save us some change on the AWS bill!”
This argument works because it focuses on the benefits of rework, and makes a case for what your company as a whole can gain from cleaner code.
If your boss can’t be swayed by anything other than the hard facts, then the stick argument is for you. The polar opposite of a carrot, this argument focuses on the worst case scenario. What will go wrong if you never pay down your technical debt?
This argument works because you’ll weaponize their cynicism against them. “Every time someone comes to you with a new feature request, we assume more risk. Eventually, entropy will take over and we won’t be able to push any code. Do you want to explain THAT to the board?”
Bonus points if you include a timeline to go with your estimated date of failure … and maybe start a pool with the other developers.
For the worry-wart, doom-and-gloom boss who always fears the worst possible outcome, lead with the most unlikely yet scariest possible outcome — “What if the developer who holds the keys to the code gets hit by a bus tomorrow?”
It’s a bad situation to be in. If one of your developers was told to “go fast” they might have gotten it done. But if they aren’t there to explain their work, it might be impossible to fix. So encourage your boss to seize the opportunity to fix it while you can!
Nearly every application, team, and company accrues technical debt over time. It’s a natural, unavoidable, and sometimes even healthy byproduct of a focused and productive team.
Think about it like this: If you have a friend with one million dollars under their mattress, they are missing some ripe opportunities. Sometimes you need to take on technical debt to get to a minimum viable product, or work toward a specific deadline, after which you have planned time for cleanup.
Working with leadership to develop criteria on when it’s ok to assume technical debt and when you need to start paying it off is the first step.
Then, the developers need to create a routine that keeps the codebase clean. For some, it’s incorporating a dedicated week into the cycle (a week six, if you run five-week sprints) where the main focus of the whole team is debt repayment.
If you have the kind of team where everyone is working in the same monolith, you can stagger developers to achieve a real-time debt payoff cadence. Structuring the workload so one engineer is always on debt cleanup duty can be a good option.
At a certain point, technical debt becomes like building a bridge. Pushing more features without resolving the bugs is like paving the road before finishing the supports. Eventually, it’s all going to collapse. Instead, get yourself and your team to a point of scalability. Then talk openly about the technical debt and address it in a proactive way, to avoid heading back to the battlefield against your boss.
For example, if a person operating a drill stops pressing the trigger for any reason, the drill stops. In this case, the drill trigger serves as a kill switch; if the operator becomes incapacitated, the drill will instantly stop (or be “killed”).
These types of safeguards aren’t limited to physical applications. We can use the concept of kill switches in software to keep our applications running smoothly even when something goes wrong.
Let’s take a look at ways we can use this kill switch concept in our Laravel apps.
When an app has a long-running process, we need a way to set a timeout in case the process runs too long; the timeout lets us clean up resources and gives users a chance to try again.
I adapted this example from the Laravel Cloud codebase. We’re essentially creating a “deployment”; imagine this as your Laravel Forge server running your deployment scripts to push the latest version of your application.
Here’s a breakdown of the process:
class Deployment extends Model
{
    // ...

    public function build()
    {
        BuildDeployment::dispatch($this);

        // This is a Kill Switch job!
        TimeoutDeploymentIfStillRunning::dispatch($this)->delay(
            now()->addMinutes(40),
        );
    }
}
The `TimeoutDeploymentIfStillRunning` job is a kill switch job! Be aware that some queue providers have a limit on how far in the future we can dispatch a delayed job. For example, with Amazon SQS, we can only delay a job up to 15 minutes.
Here’s an example of what the `TimeoutDeploymentIfStillRunning` job might look like:
class TimeoutDeploymentIfStillRunning
{
    public function __construct(public Deployment $deployment)
    {
        //
    }

    public function handle()
    {
        if ($this->deployment->hasEnded()) {
            return;
        }

        $this->deployment->markAsTimedOut();
    }
}
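As mentioned above, a provider like SQS caps delays at 15 minutes, which is shorter than our 40-minute timeout. One possible workaround, sketched below under our own assumptions (the `timeoutAt` constructor parameter and the hop logic are illustrative, not from the Laravel Cloud codebase), is to re-dispatch the kill switch job in short hops until the real deadline arrives:

use Carbon\CarbonImmutable;

class TimeoutDeploymentIfStillRunning
{
    public function __construct(
        public Deployment $deployment,
        public CarbonImmutable $timeoutAt // the real deadline (illustrative)
    ) {
        //
    }

    public function handle()
    {
        if ($this->deployment->hasEnded()) {
            return;
        }

        if ($this->timeoutAt->isFuture()) {
            // SQS only allows ~15-minute delays, so hop toward the deadline.
            self::dispatch($this->deployment, $this->timeoutAt)->delay(
                $this->timeoutAt->min(now()->addMinutes(15)),
            );

            return;
        }

        $this->deployment->markAsTimedOut();
    }
}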
This example shows us how a kill switch can be used while monitoring job deployments.
Next, let’s see how we can use external kill switch services to help us monitor scheduled tasks.
Web apps often rely on a Scheduler to run time-based tasks. Most servers use Cron, and Laravel has a built-in Task Scheduling component that simplifies this setup.
However, setting up the Scheduler in a server’s crontab is only part of the story. How can we ensure our scheduled tasks are running on our pre-defined schedule? We need something that will ring alarms whenever it doesn’t hear back from our Scheduler. We need a kill switch mechanism!
Laravel Envoyer has this feature as a service called Heartbeats. With Envoyer, we can set up a Heartbeat for the entire Scheduler at the crontab using `curl`. If our crontab dies for any reason or the server goes down, Envoyer won’t receive the Heartbeat and will notify us that something’s wrong:
* * * * * forge php artisan schedule:run && curl http://beats.envoyer.io/heartbeat-id
Note that Envoyer’s minimum interval is 10 minutes, so if the server is down, we’ll get notified after 10 minutes. If, for instance, the Scheduler is supposed to run every minute, we would be notified after the Scheduler should have already run ten times.
We can also have one Heartbeat for each scheduled task. The Laravel Scheduler has a built-in `thenPing()` method we can use to ping our Heartbeat whenever it fires that specific task:
$schedule->command('checks:trigger')
    ->everyMinute()
    ->thenPing('http://beats.envoyer.io/heartbeat-id');
With Envoyer’s Heartbeats, we have alarms at the infrastructure level to notify us when something goes wrong with our Scheduler.
Background Jobs are another common piece of infrastructure in modern web apps. Laravel has a Queue component out of the box to handle time-intensive processes that are too long for standard web requests.
In my colleague Jamison Valenta’s great post “Are Your Queue Workers ... Working?”, Jamison walks us through a queued job called `QueueHeartbeat`. The Scheduler dispatches `QueueHeartbeat`, and inside the job, an `Http::get()` call pings the Heartbeat URL.
If our queue workers are not running, the app won’t process that queued job, so Envoyer won’t hear from it and will notify us. In this case, `QueueHeartbeat` is a kill switch mechanism. I recommend checking out Jamison’s post to learn more about this approach to queued jobs.
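For reference, here’s a minimal sketch of how such a job could look; the class shape and the Heartbeat URL are assumptions on our part, so check Jamison’s post for the real implementation:

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Support\Facades\Http;

class QueueHeartbeat implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function handle(): void
    {
        // If no worker is running, this line never executes, Envoyer
        // hears nothing, and we get notified.
        Http::get('http://beats.envoyer.io/heartbeat-id');
    }
}

The Scheduler can then dispatch it with something like `$schedule->job(new QueueHeartbeat)->everyFiveMinutes();`.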
So far, the examples we’ve discussed have involved deactivating a process or sending a notification when something goes wrong; however, kill switch mechanisms can also activate processes.
The following example comes from a project I worked on a while ago—a delivery app that connects customers with delivery motorcyclists.
Here’s how we could implement this:
class DeliveryRequest extends Model
{
    public function biker(): BelongsTo
    {
        return $this->belongsTo(Biker::class);
    }

    public function startBikerMatchFinder()
    {
        $this->markAsFindingNearbyBikers();

        NotifyBikers::dispatch($this);

        // This is a Kill Switch.
        IncreaseAreaOfBikerMatchIfNoMatch::dispatch($this)->delay(
            now()->addSeconds(30),
        );

        // This is also a Kill Switch.
        TimeoutDeliveryRequestBikerFinderIfNoMatch::dispatch($this)->delay(
            now()->addMinutes(3),
        );
    }
}
Note that the `TimeoutDeliveryRequestBikerFinderIfNoMatch` job is similar to the `TimeoutDeploymentIfStillRunning` job in our previous deployment example.
- Before dispatching the `NotifyBikers` job, we mark the `DeliveryRequest` status as `finding_nearby_bikers`.
- `NotifyBikers` keeps re-notifying available bikers until the `DeliveryRequest` has either been claimed or timed out.
- `IncreaseAreaOfBikerMatchIfNoMatch` only updates the status of the `DeliveryRequest` to `finding_all_bikers` if a biker hasn’t claimed the request before it times out:
class IncreaseAreaOfBikerMatchIfNoMatch
{
    public function __construct(public DeliveryRequest $deliveryRequest)
    {
        //
    }

    public function handle()
    {
        if ($this->deliveryRequest->hasEndedMatching()) {
            return;
        }

        $this->deliveryRequest->markAsFindingAllBikers();
    }
}
For reference, here’s the `NotifyBikers` job; note how it bails out once matching has ended, and otherwise releases itself back onto the queue every 10 seconds:

class NotifyBikers
{
    public function __construct(public DeliveryRequest $deliveryRequest)
    {
        //
    }

    public function handle()
    {
        if ($this->deliveryRequest->hasEndedMatching()) {
            return;
        }

        Biker::query()
            ->available()
            ->withinRegion($this->deliveryRequest->region())
            ->chunkById(100, function ($bikers) {
                Notification::send($bikers, new NewDeliveryRequest($this->deliveryRequest));
            });

        $this->release(10);
    }
}
The `DeliveryRequest` model and its status enum tie the search area to the current status:

class DeliveryRequest extends Model
{
    protected $casts = [
        'status' => DeliveryRequestStatus::class,
    ];

    public function region()
    {
        return $this->status->regionFor($this);
    }
}
enum DeliveryRequestStatus: string
{
    case FINDING_NEARBY_BIKERS = 'finding_nearby_bikers';
    case FINDING_ALL_BIKERS = 'finding_all_bikers';

    public function regionFor(DeliveryRequest $deliveryRequest)
    {
        return match ($this) {
            static::FINDING_NEARBY_BIKERS => $deliveryRequest->coordinatesForNearbyBikers(),
            default => $deliveryRequest->coordinatesForAllBikers(),
        };
    }
}
The `DeliveryRequest::region()` method returns region coordinates based on the current status of the `DeliveryRequest`, either calling `coordinatesForNearbyBikers()` or `coordinatesForAllBikers()`.
This delivery app example shows us how we can apply our kill switch concept to activate a wider search for bikers.
As we’ve seen in these examples, kill switch mechanisms can take many shapes and forms, but the idea is simple: have a process that activates or deactivates a routine whenever it doesn’t hear back from the application, keeping things running smoothly.
Have you used these or any other forms of kill switches in your apps? Let us know on Twitter at @tightenco!
In this post, we’ll explore four possible solutions for validating data on a request that wasn’t provided by the user.
Imagine we have a shopping app with a list of products. Our `products` table has an `is_featured` column, and any product that’s featured on the home page will have that column set to `true`.
In our app, we want to make sure no one can ever delete a featured product. Let’s imagine this controller method can be called with a `DELETE` call to a URL like `http://ourapp.com/products/14`.
// ProductController.php

public function destroy(Product $product)
{
    // Perform some validation here to make sure the product isn't featured

    $product->delete();
}
What does it look like to ensure this product isn’t deleted if it’s featured? At first glance, we might assume we can’t use Laravel’s native validation tooling, since the data we’re checking against (the product, which is passed as a part of the URL) isn’t technically user input.
Let’s take a look at a few solutions below.
Authorization (the `authorize()` method)

Note: While this is the most common solution I see folks reaching for in these types of situations, I personally believe it’s not the best option. Read on to my final paragraph of this option to see why.
We can try using Laravel’s authorization tooling; there’s a method, `$this->authorize()`, available in controllers that checks against that object’s policy to see if this action is permitted.
In order to use `$this->authorize()`, we’d use Laravel’s policies to define permissions for deleting a product. If we already have a `ProductPolicy` in place, we can modify the `delete()` method to check whether the given product is featured:
// ProductPolicy.php

public function delete(User $user, Product $product)
{
    return ! $product->is_featured;
}
There are a few ways to attach Laravel’s authorization tooling to a route, including the `can` middleware, which is my preference. However, for this example, let’s keep it in the controller and use the `authorize()` method:
// ProductController.php

public function destroy(Product $product)
{
    $this->authorize('delete', $product);

    $product->delete();
}
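For completeness, here’s what the `can` middleware approach mentioned above might look like; the route definition is assumed for illustration:

// routes/web.php

Route::delete('/products/{product}', [ProductController::class, 'destroy'])
    ->middleware('can:delete,product');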
Here’s the downside of this solution, and any other solutions using authorization: a `403 Forbidden` response suggests the reason the user can’t delete this product is that they’re not authorized to do so. But this isn’t an authorization issue; it’s a validation issue. You may be authorized to delete products, but you’ve made an invalid request, which I think merits a different response.
The `abort()` and `abort_if()` Helpers

Since we’re dealing with validation, not authorization, let’s find a better response code. I’d probably use `422 Unprocessable Entity`, which is the status code Laravel throws when a JSON request fails validation.
In this case, we can use the `abort_if` helper to check the product’s status inline and return a `422` status code if it’s invalid.
// ProductController.php

public function destroy(Product $product)
{
    abort_if($product->is_featured, 422);

    $product->delete();
}
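If you’d like the response to explain why the request failed, `abort_if()` also accepts a message as its third argument; a small variation on the example above:

// ProductController.php

public function destroy(Product $product)
{
    // Respond with 422 and a human-readable reason.
    abort_if($product->is_featured, 422, 'Featured products cannot be deleted.');

    $product->delete();
}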
This is a good start. However, we’re missing out on a lot of what Laravel provides when using built-in validation.
When validation fails using Laravel’s native methods, Laravel throws a `ValidationException` with errors attached to it. The exception handler class (`Illuminate\Foundation\Exceptions\Handler`) catches these exceptions and then converts them to either a JSON response or a redirect, depending on the request type. We don’t get any of this with our `abort_if()` call.
Since we know throwing a `ValidationException` would allow us to generate a more robust error, we can manually throw one in the controller:
// ProductController.php

use Illuminate\Validation\ValidationException;

public function destroy(Product $product)
{
    throw_if($product->is_featured, ValidationException::withMessages([
        'product' => ['Featured products cannot be deleted.'],
    ]));

    $product->delete();
}
Here we’re using Laravel’s `throw_if` helper function to throw the `ValidationException` when the first parameter (product is featured) is true. The `withMessages` static constructor method provides a convenient way to add our custom error message to the `product` key.
These `throw_if` and `abort_if` solutions are probably the cleanest options for a simple boolean check; however, what if the validation logic is complicated enough to warrant extracting this code out of the controller? Let’s explore another option for this below.
Laravel’s form requests are a perfect tool for extracting validation logic into a dedicated class. Each form request has a `rules()` method, which allows us to define how to validate each piece of user input.
Since the examples we’re covering here aren’t validating user input, we might assume the `rules()` method, which traditionally pulls only the user data (using `request()->all()`), would be out of the question.
However, it turns out it’s possible to override the data a form request is using for its validation, and we can add our own data in and then use `rules()` to validate that data.
Let’s imagine we’ve created a `DeleteProductFormRequest`. We want to modify this request so it uses our own array for the data it’s validating, which we can define in the `validationData()` method:
// DeleteProductFormRequest.php

public function validationData()
{
    return [
        'product' => $this->route('product'),
    ];
}
Note: `$this->route('product')` gives us the `product` variable defined in the route and instantiated as an Eloquent object by route model binding. However, if this request wasn’t using route model binding, we could instead look up the product in the `validationData()` method or its partner method `prepareForValidation()`, using its ID pulled from `$this->input('product_id')` or something similar.
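As a quick sketch of that alternative, assuming the request carries a `product_id` field:

// DeleteProductFormRequest.php

protected function prepareForValidation(): void
{
    // Look the product up ourselves, since route model binding isn't in play.
    $this->merge([
        'product' => Product::findOrFail($this->input('product_id')),
    ]);
}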
Now that we have a `product` to validate, we can add our validation to the `rules` method; since this is not a normal validation use case, we need to build a custom rule for it, passed as a closure:
// DeleteProductFormRequest.php

public function rules()
{
    return [
        'product' => [
            function ($attribute, $value, $fail) {
                if ($value->is_featured) {
                    $fail('Featured products cannot be deleted.');
                }
            },
        ],
    ];
}
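With that in place, the controller only needs to type-hint the form request; Laravel resolves it and runs the validation before the method body executes. A minimal sketch:

// ProductController.php

public function destroy(DeleteProductFormRequest $request, Product $product)
{
    // If we've reached this line, the product isn't featured.
    $product->delete();
}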
There you have it: one not-so-great and three great solutions for the next time you need to validate a request based on something other than user input.
Please, if you remember nothing else, remember this: there’s a difference between authorization and validation. Authorization—for example, using `$this->authorize()`—sends a confusing message to the consumers of this route, suggesting the wrong reason it’s being rejected.
If we set our brains to understand that we’re instead validating whether this is a valid request, we see examples in Laravel’s existing tooling that guide us toward a better way to handle this issue: minor tweaks to the existing validation ecosystem allow it to handle non-user-provided data.
Not necessarily.
While staff augmentation promises a quick fix to your output problems, the reality is often more complex. The truth is, companies that reach for this arrangement often end up spinning their wheels on feature development, stifling the progress of their team members’ development and, ultimately, wasting their money.
Luckily, there’s a better way to add temporary capacity to your team: an embedded team.
How is an embedded team better? We’ll get to that. But first, let’s take a look at staff augmentation and some of its pitfalls.
Staff augmentation is the process of bolstering your software team with temporary developers on an as-needed basis. Usually this is accomplished through a company that provides staffing services in exchange for a markup on each individual’s hourly rate. One or more temporary contractors join your team either remotely or on-site for an agreed-upon amount of time. As with your own team, you’re responsible for tasking and managing them, and you’re charged for their time, as opposed to their output or the value they deliver. You sometimes have the option to extend their contract when it expires, but not always.
This arrangement offers the promise of an easy surge in development capacity without the hassle and risk of hiring full-time, but staff augmentation fails to consider the complexities of development projects. Because of this, adding more “butts in seats”—even relatively skilled “butts”—doesn’t mean your project will suddenly become more successful.
If you add two people to a four-person dev team tomorrow, their project should proceed 50% faster, right? Probably not. As Fred Brooks discusses in his book The Mythical Man-Month, adding programmers to a behind-schedule project often makes the project even more behind schedule.
There’s even a name for this phenomenon: “Brooks’ Law.” It states that, “adding [person-power] to a late software project makes it later.” Brooks states that, because adding new people involves costly ramp-up time and increases communication overhead, the effect is net-negative on software team output.
Brooks knew in 1975 that throwing bodies at your software output problem doesn’t usually work, and yet there’s a billion-dollar industry that purports to do just that. Simply adding more people to a project is unlikely to help. However, adding the right people, those who can avoid the ramp-up and manage themselves effectively, can make your project better.
In seeking to be the one true solution to increasing your team’s output, staff augmentation looks at every problem the same way. If you need to get more done, staff augmentation says “we have developers who can do x … how many would you like, and for how long?”
But we don’t have to tell you that not all developers have the same expertise, experience, communication skills, and problem-solving capabilities. Developers aren’t fungible resources to be shuffled around and hot-swapped at a moment’s notice, but that’s exactly how staff augmentation treats them.
It offers a one-size-fits-all solution without investigating your specific context or addressing the reasons your project was struggling in the first place. If you’re lucky, your staffed developers will stay in their lane and write adequate code. If you’re not, your team might end up paying lots of money for the perverse privilege of going even slower.
When a new developer shows up to work with your team, they likely won’t know anyone other than the person(s) who interviewed them. As it turns out, people contain multitudes—you can be shy, introverted, egotistical, and/or talkative, depending on any number of situational factors. This means very few people (even from the general population, much less the subset of people who are programmers) are comfortable asking a total stranger a bunch of questions that might make them look dumb on day one.
So there’s going to be a learning curve where your new teammate has to get up to speed. Yes, your new staff aug’d developer has access to all the wonderful resources the Internet has to offer, just like anyone else. What they don’t have is the interpersonal relationships the rest of your team has, so when they get stuck on a programming problem, they are just another lonely person with a browser, googling around for answers (but mostly finding cat videos).
Meanwhile, the rest of your team is collaborating and commiserating and laughing at cat videos with each other and wondering what to make of the new, weird person in the corner you’ve asked them to work with all of a sudden.
In its valiant attempt to turn engineering effort into a liquid resource, staff augmentation can end up masking the more complex problems that are hindering your team’s progress.
The option to reach for staff augmentation every time things aren’t going as fast as you want allows engineering leaders to avoid dealing with their team’s deeper issues. When output is lower than you expect, you may think it’s because your team is too small or not skilled enough. But adding a couple of anonymous, temporary developers won’t fix your poor code hygiene, a busted deployment pipeline, or management shortcomings.
The crutch of staff augmentation can also cause engineers and managers to be less rigorous in avoiding technical debt (how many times have you said “we’ll clean it up right after this big push”), borrowing from future productivity, which of course only delays the inevitable reckoning. Fast-forward a few months, and you may wind up drowning in rework on features you shipped long ago. Time to push the bail-out button again!
Staff augmentation allows a software team to increase the number of people available to get work done on a given project. Good contract developers will usually be capable of receiving instructions, pulling tickets, and writing decent code. So despite all the potential pitfalls, if it goes well, staff augmentation can have an additive effect on a team’s capacity.
An embedded team addresses the same set of problems as staff augmentation but does so much more effectively and with significant additional benefits that go far beyond increasing speed and capacity.
Let’s take a closer look at some of the benefits an embedded team offers.
Embedded teams are carefully vetted and work within a tight technical niche (in Tighten’s case, Laravel, Vue.js, and related technologies). The best agencies only work with groups that need the specific, deep expertise they have to offer, so if you find that right-fit agency for your team, their embedded team is highly likely to deliver good value. They have what you need and a ton of it.
An embedded team is “embedded” in two different ways. First, within your team and project, and second, within the agency that employs them. At Tighten, our team members work on a single client project at a time, and they do so while situated in the Tighten Slack workspace with 20 other experts in the same stack. When they run into a tough problem, they don’t just go Googling around. They put the problem to the group of experts they are immersed in, leveraging its real-time, collective expertise on their clients’ behalf. You add two developers, but you gain the capacity of a 20-person expert team when you run into roadblocks and bottlenecks.
An embedded team always leverages its talent collaboratively. To solve Brooks’ Law, where more people actually cause more delays, embedded teams are structured intentionally, with two developers and a fractional project manager forming the atomic team unit.
Beyond the obvious administrative benefit of having a project manager, developers need long stretches of deep, uninterrupted time to do their best work, and the cost of context-switching is very high. When a developer gets pulled away from their work for 5 minutes to handle an organizational task, that switch costs far more than 5 minutes of productivity. It could take as much as an hour for that developer to get back to the flow state they were in before the distraction.
Deploying as a team—two developers with a dedicated project manager—lets the team stay heads-down on the project, while the project manager handles the other equally important tasks that keep the whole project moving forward.
Embedded teams, equipped with deep experience, technical expertise, and an outsider’s fresh perspective, are trained not only to solve your development problems but to identify and improve the issues that hamper your team’s progress. With an embedded team, you not only level up your dev capacity, but you level up your team in a holistic way, gaining crucial insights into your codebase, processes, and even your team dynamic.
This can create lasting benefits you reap long after your embedded team has rolled off of your project. You might gain a new understanding of how to improve your development processes, how to improve your architecture, and how to create a more knowledgeable, fulfilled team.
Staff aug seems like the easy solution because the agency takes an oversimplified approach to providing dev talent. But at the end of the day, staff augmentation tries to solve your complex, nuanced codebase challenges with a standard-issue developer (or two or three) that might not have the deep skills you need.
Tighten’s method is designed to specifically avoid spending time and resources on a staffing situation that doesn’t actually make any progress (or worse, torpedoes the current project and sinks future opportunities into technical debt). When you need to add capacity with an embedded team, you get more than butts in seats. You get critical problem-solving brainpower. Learn more about how to deploy an embedded team to accelerate your software development process — without the traditional staff augmentation headaches.
This post assumes basic familiarity with Laravel’s request lifecycle and middleware.
Creating and interacting with middleware is a common task for Laravel developers. You’re probably familiar with before and after middleware. Before middleware can be used to authenticate users, set the app language, or limit responses based on the request. After middleware can be used to add cookies or update response headers. In this post, we’ll look at a handy but less utilized type of middleware: Terminable middleware.
Terminable middleware run after your response is sent to the browser, making them uniquely useful for situations where you wish to run some code without blocking the response. For example, you could use terminable middleware for logging information about the request and the response, dispatching emails or notifications, or cleaning up temporary data. For more resource-intensive processes after the request, Laravel queues are a better solution.
The illustration above, based on a graphic from Laravel Up and Running, shows where before, after, and terminable middleware are processed during the request lifecycle.
Unlike other types of middleware, terminable middleware define a `terminate()` method rather than `handle()`. The example below demonstrates this method.
// app/Http/Middleware/MyMiddleware.php

class MyMiddleware
{
    public function terminate($request, $response): void
    {
        // run some code
    }
}
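As a concrete, illustrative example, here’s a terminable middleware that logs how long each request took after the response has already gone out. The class name is ours, and we’re assuming the standard Laravel skeleton, where `public/index.php` defines the `LARAVEL_START` constant:

// app/Http/Middleware/LogRequestDuration.php

use Closure;
use Illuminate\Support\Facades\Log;

class LogRequestDuration
{
    public function handle($request, Closure $next)
    {
        return $next($request);
    }

    public function terminate($request, $response): void
    {
        // Runs after the response is sent, so this write can't slow it down.
        Log::info('Request handled', [
            'path' => $request->path(),
            'duration_ms' => (microtime(true) - LARAVEL_START) * 1000,
        ]);
    }
}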
Apply your middleware globally or on a route/route group basis.
To apply globally, add your middleware class to the `$middleware` array in the `app/Http/Kernel.php` class.
// app/Http/Kernel.php

protected $middleware = [
    // ...
    \App\Http\Middleware\MyMiddleware::class,
];
To apply on a route or route group, assign your middleware a key in the `$routeMiddleware` array in the `app/Http/Kernel.php` class. Then pass the assigned key you created to the `middleware` method in your route file.
// app/Http/Kernel.php

protected $routeMiddleware = [
    // ...
    'myMiddleware' => \App\Http\Middleware\MyMiddleware::class,
];
// routes/web.php

Route::get('/profile', function () {
    //
})->middleware('myMiddleware');
Like other types of middleware, terminable middleware are executed by the HTTP kernel. Let’s take a look at Laravel’s `public/index.php` file to see when they’re called:
// The `handle()` method, called from the app kernel, calls any
// Before or After middleware defined in middleware classes.
$response = $kernel->handle(
    $request = Request::capture()
)->send();

// The `terminate()` method calls any Terminable middleware
// defined using a `terminate()` method.
$kernel->terminate($request, $response);
This `$kernel->terminate()` function later calls the `terminateMiddleware()` method defined in the `Illuminate\Foundation\Http\Kernel` class.
/**
 * Call the terminate method on any terminable middleware.
 *
 * @param  \Illuminate\Http\Request  $request
 * @param  \Illuminate\Http\Response  $response
 * @return void
 */
protected function terminateMiddleware($request, $response)
{
    $middlewares = $this->app->shouldSkipMiddleware() ? [] : array_merge(
        $this->gatherRouteMiddleware($request),
        $this->middleware
    );

    foreach ($middlewares as $middleware) {
        if (! is_string($middleware)) {
            continue;
        }

        [$name] = $this->parseMiddleware($middleware);

        // creates a fresh instance of your middleware
        $instance = $this->app->make($name);

        if (method_exists($instance, 'terminate')) {
            $instance->terminate($request, $response);
        }
    }
}
This method creates a fresh instance of your middleware. If you have a before or after middleware that is also terminable, and you need the same instance of the middleware class to be re-used before/after the request and for termination, you can bind your middleware to the container as a singleton. Check out the documentation for an example.
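A minimal sketch of that binding, registered in a service provider:

// app/Providers/AppServiceProvider.php

public function register(): void
{
    // Re-use one instance for both handle() and terminate().
    $this->app->singleton(\App\Http\Middleware\MyMiddleware::class);
}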
You may be wondering how terminable middleware differ from after middleware. After middleware execute before the response is returned to the browser. On the other hand, terminable middleware execute after the response is returned. When deciding which is most appropriate for your application, consider the following:
Your server needs to use FastCGI for your middleware’s `terminate` process to run automatically.
Identifying if FastCGI is available on your server is beyond the scope of this post, but most servers use it. If you try it and it doesn’t work, check with your server administrator.
Running terminable middleware requires PHP’s `fastcgi_finish_request()` function, which closes the connection to the client while keeping the PHP worker available to finish the request (i.e., execute the code in your `terminate()` method).
Because of this, expensive processes like complex database queries or external requests can block workers from performing other required tasks and slow your app’s response time, or even cause gateway errors. Queue workers are better suited for these sorts of tasks.
To test terminable or any other middleware, ensure your middleware run during your test request. Make sure that your test class does not use the `WithoutMiddleware` trait. Or, you can include middleware for an individual test by calling `$this->withMiddleware();` at the top of your test method.
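For example, here’s a quick sketch of a feature test hitting the `/profile` route from earlier (the test name and assertion are illustrative). Laravel’s test kernel calls `terminate()` after handling the request, so terminable middleware on the route will run:

public function test_profile_route_runs_terminable_middleware(): void
{
    $this->withMiddleware();

    $this->get('/profile')->assertOk();
}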
Now that we’ve covered the basics, you can play around with creating your own terminable middleware. Try logging a message or sending an email. If you’ve used terminable middleware before, we’d love to hear how you’ve implemented it in your apps. Send us a tweet @TightenCo!