{"pageProps":{"thoughts":[{"title":{"html":"
On defining your ways of working: turning ideas into value, deciding how to work together, nurturing a feedback culture and adapting your process to meet changing needs.
","plain":"On defining your ways of working: turning ideas into value, deciding how to work together, nurturing a feedback culture and adapting your process to meet changing needs."},"slug":"defining-your-ways-of-working","tags":["Software development","Software development ways of working"],"publishedAt":{"pretty":"28th May 2020","iso":"2020-05-28T00:00:00.000+00:00"},"featuredImage":"/media/flow-with-artefacts-and-ceremonies-social.png","content":{"html":"On defining your ways of working: turning ideas into value, deciding how to work together, nurturing a feedback culture and adapting your process to meet changing needs.
Software development teams are able to adapt their process to meet the problems they face. In a rapidly changing digital world, the ability to learn and adapt is a competitive advantage. Teams must be enabled to define their ways of working and continue to learn and adapt as they go.

Ways of working go way beyond just picking a software development process such as Scrum, Kanban, or (hopefully not) SAFe. Frameworks are never complete, except maybe in the case of SAFe, which in my opinion is completely bonkers.

In a lot of ways, you need to forget about wholesale dropping in an entire process that was likely designed for needs very different from your own.
Deciding as a team

As a self-organising team you need to have a shared understanding of your ways of working. You must come to that understanding together.

You may be working in an organisation that has established processes. You may not be the first software development team to form. Your team may have even worked together before. All the same, your team will have a new purpose and mission, solving a new problem, and therefore it is healthy to explicitly define your process together.

What you decide now isn't set in stone. In fact, as you work towards your mission you will have to learn and adapt – your process should evolve as you do.
Start by defining your flow, or as you may know it, product development lifecycle. You can do this by asking your team four questions:

- How will you turn ideas into value?
- How will you work together?
- How will you seek feedback?
- How will you evolve your process?

Let's think about these questions and how you can work with your team to define your ways of working. You can do this when first forming your team or if your team has yet to define your ways of working together.
How will you turn ideas into value?

The fundamentals of software development are simple: as a team you convert opportunities and ideas into value by delivering working software into the hands of users. Some teams call this flow a product development lifecycle. Some teams call it by the name of a particular agile framework, like Scrum. What you call it isn't important, but having a shared understanding of how it works is critical.

It may well be that your team chooses Scrum as their agile delivery framework of choice. Unfortunately, Scrum was created by developers, and they forgot altogether to include instructions on how to incorporate design in the Scrum development process. Whoops.

The lack of design process in such frameworks pushes design back into waterfall "design up front" territory, often disguised as user-first discovery. Instead, you want designers and developers working hand-in-hand to deliver valuable, usable and feasible software.

Your team can therefore start with this or that framework, but you must ultimately define a flow that works for your team, probably combining a number of different methodologies. This is what some people refer to as "being agile" rather than "doing agile". There is no magic process: the magic is in your team's ability to adapt to their environment.

To create a shared understanding as a team, you can map out your flow, artefacts and ceremonies on a whiteboard.
Flow

As a team you need to understand how you are validating ideas and opportunities to ensure they deliver valuable, usable and feasible software. You need a way of turning those ideas into deliverable units of work. Once released, you must make sure to validate that your software is having the desired impact. This is your flow.

Your team should be clear on the stages of your flow and what enables work to progress from one stage to another. Your flow should be simple enough to be drawn on a whiteboard in a minute or two, and I'd recommend doing this whenever you review your ways of working.
Artefacts

Artefacts are what get your product owners and delivery managers excited. Popular examples include product roadmaps, product backlogs, user story maps, sprint backlogs and Kanban boards. I warn you now: many hours have been lost perfecting artefacts that are, for the most part, ephemeral and thrown away.

The value of artefacts lies in their use as communication tools by your team and wider stakeholders. Artefacts don't work without shared understanding; you can't just show your boss your product backlog and expect them to understand it. Artefacts have to be used and changed communally.

Artefacts help you communicate the stages of your flow and the work currently in those stages. As a team, you need to agree what artefacts you wish to use and how they relate to each stage of your flow. Try adding them to your flow on the whiteboard.
Ceremonies

Teams use ceremonies to communicate and to evolve their shared understanding. Ceremonies can be used to celebrate successes and learn from mistakes, and they are the places where artefacts can be changed communally.

Common ceremonies include backlog planning, backlog refinement, stand-ups, stand-downs, show and tells, showcases, and retrospectives.

You can definitely overload your team with too many ceremonies. No one likes being stuck in a day's worth of ceremonies every week or two, no matter how much you like your colleagues. Start with a minimum and make sure you timebox them: it should be exceptional for a ceremony to run longer than an hour, and your stand-ups shouldn't turn into sit-downs.

As a team, you need to be clear on what ceremonies you wish to use, when they should occur, and how they relate to each stage of your flow. Finish the sketch of your flow by adding ceremonies to it.
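To sanity-check your ceremony load, it can help to literally add it up. This is a rough sketch with entirely hypothetical ceremonies and durations (none of these figures come from any particular team):

```python
# Hypothetical fortnightly ceremony schedule, in hours -- adjust to your team.
ceremonies = {
    "backlog planning": 1.0,
    "backlog refinement": 1.0,
    "stand-ups (10 x 15 min)": 2.5,
    "show and tell": 1.0,
    "retrospective": 1.0,
}

total_hours = sum(ceremonies.values())
budget_hours = 8.0  # assumption: a day's worth per fortnight is the upper bound

print(f"{total_hours:.1f}h of ceremonies per fortnight")
assert total_hours <= budget_hours, "ceremony overload -- trim or shorten something"
```

If the assertion trips, that's your cue to shorten or drop a ceremony rather than to raise the budget.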
Where do you start?

If you're not sure where to start, I recommend starting with a framework such as Scrum or Kanban. They provide a flow, a set of artefacts, and ceremonies (though Scrum calls the latter "events"). I know I said you need to forget about wholesale dropping in frameworks, and you should: you can't just expect them to work. They take time to adopt and adapt to your team's particular needs.

What you won't get from Scrum or Kanban is how your team should work together to validate ideas, develop software or prove that your software is meeting the needs of your organisation and users. Don't worry, we'll get onto that next.
How will you work together?

So far we've focussed on processes and tools. If you've seen the Agile Manifesto you may now be feeling bad. After all:

"Individuals and interactions over processes and tools" (Manifesto for Agile Software Development)

Well, I wouldn't feel too bad if I were you. It's not like the Agile Manifesto is saying you shouldn't have tools and processes. In fact, it's easier to talk about individuals and interactions against the backdrop of an easy-to-understand process.

Individuals and interactions are, however, more important than processes and tools. The quality and impact of your work depends on a team that works well together. You are solving problems, and how you do that may have to change; you can't just blindly follow a process and hope for the best.

Together you need to agree on the shared values that underpin your work. You will need to define roles and responsibilities that describe what tasks and activities need to be performed by the team. You should also define the core practices you will use to complete your work.
Shared values

Values are the shared attributes that you expect and encourage from each of your team members. Your values are the foundation of how you will work together as a team.

The Agile Manifesto has four values that are best summarised as: human-centred, working software, communication and responsiveness. Extreme Programming (XP) has five: communication, simplicity, feedback, courage and respect. Why doesn't anyone ever add "fun" to the list?

I'd suggest an exercise where each team member spends some time individually coming up with values that they feel are important and align with your team's purpose and mission. You can then compare these together and arrive at 3-7 values you all agree on.
Roles and responsibilities

It's important to remember that you're forming a multidisciplinary team and that each of your team members contributes their own strengths to it. Roles in your team aren't there to create silos, burden individuals, or create hierarchy. Roles should create clarity on what your team needs to do; those roles may be shared by the whole team, or mostly owned by a person with certain skills.

You can draw a team canvas that includes the various stakeholders and skill sets needed in your team. On this canvas you can place post-its that represent the roles your team has decided on.

If you are practising Scrum, you might include product owner in the product circle, developer in the technology circle, and scrum master in the middle of the Venn diagram. You will also need to include roles that Scrum doesn't define, such as user researcher and interaction designer; those go in the design circle.

Next to roles you can put more post-its with team members' names on them. You can, and probably should, have multiple post-its for one person. For example, your whole team may take turns wearing the Scrum Master hat, rather than having someone in that dedicated role for the entire flow. You could treat product ownership and user research as whole-team activities too.
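If you do rotate a role like scrum master, even a trivial rota makes the expectation explicit. A sketch with made-up names and sprint numbers (nothing here comes from any real team):

```python
from itertools import cycle

# Hypothetical team members -- the scrum master hat rotates each sprint.
team = ["Asha", "Ben", "Chiara", "Dev"]
sprints = range(1, 7)

# Pair each sprint with the next person in an endless cycle of the team.
rota = {sprint: name for sprint, name in zip(sprints, cycle(team))}

for sprint, name in rota.items():
    print(f"Sprint {sprint}: {name} wears the scrum master hat")
```

The point isn't the code, of course; it's that a rotation only works when everyone can see whose turn is coming.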
While not strictly roles within the team, you can also include roles and names for the outer areas of the business, if applicable to your team's work: for instance, subject matter experts, organisational leadership and key users.

Each role needs clearly defined responsibilities too. In some cases, such as product owner and scrum master, you can defer to textbook definitions. Sadly though, a lot of the responsibilities of healthy and successful teams aren't defined in guides to agile frameworks.

Which roles are responsible for reporting regularly to the wider organisation about the team's progress? How will you keep an eye on team health and happiness? Code quality? Again, these might not be owned by one person, but you should express them explicitly and put one or more names against them.

By the end of building your team canvas, you should be clear on roles and responsibilities based on what you know now. Bear in mind, this can always be revised later on.
Core practices

Oh boy, I usually get really excited at this point.

Next, your team should decide on the core practices they want to use when working together. You can roughly categorise core practices using the Venn diagram again.

Product management practices might include planning poker, three amigos or user story mapping. You could decide to maintain a physical wall for this in a shared office space, or you may decide to keep it digital, using tools like Trello, JIRA or whatever your flavour of the month is.

Design practices such as usability testing can be conducted on a regular basis to ensure the software you are building is usable. You could decide to maintain a research wiki to make it easily accessible to your team as well as to other teams.

Technology practices are the backbone of delivering high-quality software. Your team should be adopting test-driven and trunk-based development, pair programming, small releases, refactoring, peer reviews using pull requests, continuous integration and automated deployments.
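To make "test-driven" concrete, here is the smallest possible sketch of the red-green-refactor rhythm, using a hypothetical price-formatting function and plain assert statements (a real team would likely use a proper test runner such as pytest):

```python
# Step 1 (red): write the test first, describing the behaviour you want.
def test_format_price():
    assert format_price(1999) == "£19.99"
    assert format_price(50) == "£0.50"

# Step 2 (green): write just enough code to make the test pass.
def format_price(pence: int) -> str:
    """Format a price in pence as pounds, e.g. 1999 -> £19.99."""
    return f"£{pence / 100:.2f}"

# Step 3 (refactor): improve the code freely, with the test as a safety net.
test_format_price()
print("tests passed")
```

The discipline, not the example, is the point: the test exists before the code it exercises, so every refactor that follows is protected.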
You'll never complete a full list of practices, especially as you may occasionally experiment and try new things as a team. The idea here is to define a set of sensible defaults the team should always be using.
How will you seek feedback?

Fast feedback cycles are a key tenet of agile software delivery. They enable your team to minimise waste, make failing safe for all and guide you towards success.

Your team can minimise waste by testing hypotheses before building anything. Researchers, designers and developers can work together to define problem and solution hypotheses that enable them to test ideas. You might choose to spend a certain amount of each iteration testing hypotheses with user research, surveys, technical spikes and prototypes.

By releasing software early and often your team has earlier opportunities to learn. You can test your software with users, particularly early adopters, to ensure you are building the right thing. The more frequently you can release software, the more opportunity you have to correct a wrong or find out if an idea has failed. Failing becomes safe when failure is cheap, and you make failure cheap by seeking feedback on a daily or weekly basis, ensuring failure costs you days rather than months.
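The arithmetic behind "days rather than months" is simply that your worst case is bounded by the gap between feedback points. A quick sketch with entirely hypothetical figures (team cost and cadences are made up for illustration):

```python
# Assumption: a 5-person team costing £2,000 per day in total.
TEAM_DAY_COST = 2_000

def worst_case_cost(feedback_interval_days: int) -> int:
    """Upper bound on money spent before a bad idea can be caught:
    the team can head in the wrong direction for at most one interval."""
    return TEAM_DAY_COST * feedback_interval_days

print(f"Quarterly release: up to £{worst_case_cost(90):,} at risk")
print(f"Weekly release:    up to £{worst_case_cost(7):,} at risk")
print(f"Daily release:     up to £{worst_case_cost(1):,} at risk")
```

Under these assumptions, moving from a quarterly big-bang release to a weekly one cuts the worst-case cost of a failed idea by more than an order of magnitude.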
Ensuring your team has enough feedback mechanisms built into their ways of working means they will have regular opportunities to adjust their course. The more frequently the team can correct their bearing, the sooner they will deliver software that generates value for both their organisation and its users.

With the team you should discuss both formal and informal ways of collecting feedback.
Formal feedback

Your team should decide what formal feedback they would like to include in their ways of working, and when. This is usually done by agreeing practices that need to occur at certain points of your flow to encourage feedback. They should generally happen at those points unless there are exceptional circumstances.

Using workshops like user story mapping, and including organisational leadership, the team and other stakeholders, you can build consensus and receive feedback on your plan. Backlog planning is another way of ensuring your team can provide feedback to the product owner and vice versa.

You could consider using three amigos, a practice that historically included the product owner or business analyst, a tester and a developer. These days, organisations that use automated testing may instead choose to include a user researcher or designer in place of a tester. You can use three amigos to ensure work is ready before it's picked up for development, to check that it meets the team's definition of 'done' when complete, and then to prove it is having the impact you set out to achieve by reviewing KPIs and testing thoroughly with users.

Developers in your team may choose to use peer reviews before code is merged to give the wider team, including designers, an opportunity to review work. Any visual changes should either be deployed to a dedicated environment for the review or at least be included as screenshots along with an explanation of the changes, so that designers can give feedback before code is merged.

The showcase and retrospective are common formal feedback mechanisms that most software development teams include in their process. The showcase enables the organisation and its users to provide feedback to the team. The retrospective enables the team to give feedback to itself.
Informal feedback

Your team should also decide what informal feedback they wish to encourage. These are practices or team norms that can be used on an ad-hoc basis.

Your team should encourage a feedback culture where feedback is shared frequently, when things go well and when they don't. Feedback cultures like this require constant maintenance to ensure that team members feel safe receiving and giving feedback. It's worth it though; after all, feedback is a gift to both your colleague and your future self.

Your team might occasionally include hypothesis testing as part of your iteration goals. By testing hypotheses in your iterations your team can work together to test ideas, whether that's in user testing labs or by building a prototype.

Another practice you could adopt is research and design reviews, where the team gets together to review new research developments and designs in whatever form they take. This is an opportunity to review and critique designs with a cross-section of the team.

Your team might choose to encourage pair programming some or all of the time. This enables pairs of developers to provide immediate feedback to each other as they work on a feature.

If your team is co-located with organisational users, or can easily get to them, then encouraging ad-hoc feedback, by walking over to those users with laptop in hand to check something, gives you very immediate feedback. You do need to establish some etiquette with your users if you are planning on doing this, though. Try not to be "that" person.
How will you evolve your process?

I hope you didn't think you were going to define your ways of working up front and then be done with them? We're talking about agile here rather than waterfall, buddy.

You might find these ways of working difficult to uphold at all times. And that's okay! It's likely that they won't all meet your team's needs, and you won't really know until you get started. Rather than fixating on it for too long, see if you can run a workshop for a couple of hours when forming your team to establish your initial ways of working, and then agree a frequency at which you'll review them.

You should use retrospectives as a weekly or fortnightly opportunity for your team to discuss the good, the bad and the ugly of your ways of working. Not every retrospective format will make this easy, but as long as you get an opportunity every few retrospectives, that should be enough.

It's worth putting in quarterly ways of working reviews with your team too. These can usually line up with your mission and quality goal reviews. Having a quarterly breakpoint to take a deep breath and reflect as a team for a day or two, or even a week, is both a great pressure valve and a revitaliser.

By establishing and evolving your ways of working collaboratively, your team will work as an effective unit, adapt to the challenges they face and iteratively release working software that delivers value for both your organisation and its users.
","plain":"On defining your ways of working: turning ideas into value, deciding how to work together, nurturing a feedback culture and adapting your process to meet changing needs.\n\n\nSoftware development teams are able to adapt their process to meet the problems they face. In a rapidly changing digital world, the ability to learn and adapt is a competitive advantage. Teams must be enabled to define their ways of working and continue to learn and adapt as they go.\nWays of working go way beyond just picking a software development process such as Scrum, Kanban, or (hopefully not) SAFe. Frameworks are never complete, except may be in the case of SAFe, which in my opinion is completely bonkers.\nIn a lot of ways, you need to forget about wholesale dropping in an entire process that was likely designed for very different needs than yours.\nDeciding as a team\nAs a self-organising team you need to have a shared understanding of your ways of working. You must come to that understanding together.\nYou may be working in an organisation that has established processes. You may not be the first software development team to form. Your team may have even worked together before. All the same, your team will have a new purpose and mission, solving a new problem, and therefore it is healthy to explicitly define your process together.\nWhat you decide now isn't set in stone. In fact, as you work towards your mission you will have to learn and adapt – your process should evolve as you do.\nStart by defining your flow, or as you may know it, product development lifecycle. You can do this by asking your team four questions:\n\n How will you turn ideas into value?\n How will you work together?\n How will you seek feedback?\n How will you evolve your process?\n\n\nLet’s think about these questions and how you can work with your team to define your ways of working. 
You can do this when first forming your team or if your team has yet to define your ways of working together.\nHow will you turn ideas into value?\nThe fundamentals of software development are simple: as a team you convert opportunities and ideas into value through delivering working software into the hands of users. Some teams call this flow a product development lifecycle. Some teams call it by the name of a particular agile framework like Scrum. What you call it isn't important, but having a shared understanding of how it works is critical.\nIt may well be that your team chooses Scrum as their agile delivery framework of choice. Unfortunately Scrum was created by developers and they forgot altogether to include instructions on how to incorporate design in the Scrum development process. Whoops.\nThe lack of design process in such frameworks then pushes design back into a waterfall "design up front territory" and is often disguised as a user-first discovery. Instead, you want designers and developers working hand-in-hand, working together to deliver valuable, usable and feasible software.\nTherefore your team can start with this or that framework but you must ultimately define a flow that works for your team, probably composing a number of different methodologies. This is what some people refer to as the idea of “being agile” rather than “doing agile”. There is no magic process, the magic is in your team's ability to adapt to their environment.\nTo create a shared understanding as a team you can map out your flow, artefacts and ceremonies on a whiteboard.\n\n\nFlow\nAs a team you need to understand how you are validating ideas and opportunities to ensure they deliver valuable, usable and feasible software. You need a way of turning those ideas into deliverable units of work. Once released, you must make sure to validate your software is having the desired impact. 
This is your flow.\nYour team should be clear on the stages of your flow and what enables them to progress work from one stage to another. Your flow should be simple enough to be drawn on a whiteboard in a minute or two and I'd recommend doing this whenever you review your ways of working.\nArtefacts\nArtefacts are what get your product owners and delivery managers excited. Popular examples include: product roadmaps, product backlogs, user story maps, sprint backlogs, Kanban boards, etc. I warn you now, many hours have been lost in perfecting artefacts which for the most part are ephemeral and thrown away.\nThe importance of artefacts is in their ability to be used as communication tools by your team and wider stakeholders. Artefacts don't work without shared understanding, you can't just show your boss your product backlog and expect them to understand it. Artefacts have to be used and changed communally.\nArtefacts help you communicate the stages of your flow and the work currently in those stages. As a team, you need to agree what artefacts you wish to use and how they relate to each stage of your flow. Try adding them to your flow on the whiteboard.\nCeremonies\nTeams use ceremonies to communicate and to evolve their shared understanding. Ceremonies can be used to celebrate successes and learn from mistakes, and ceremonies are the places where artefacts can be changed communally.\nCommon ceremonies include: backlog planning, backlog refinement, stand ups, stand downs, show and tells, showcases, and retrospectives.\nYou can definitely overload your team with too many ceremonies. No one likes being stuck in a day's worth of ceremonies every week or two, no matter how much you like your colleagues. 
Start with a minimum and make sure you timebox them, it should be exceptional for a ceremony to run longer than an hour, and your stand-ups shouldn’t turn into sit-downs.\nAs a team, you need to be clear on what ceremonies you wish to use, when they should occur, and how they relate to each stage of your flow. Finish the sketch of your flow by adding ceremonies to it.\nWhere do you start?\nIf you're not sure where to start, I recommend starting with a framework such as Scrum or Kanban. They provide a flow, a set of artefacts, and ceremonies though the latter Scrum calls events. I know I said you need to forget about wholesale dropping in frameworks, and you should, you can’t just expect them to work. They take time to adopt and adapt to your teams particular needs.\nWhat you won't get from Scrum or Kanban is how your team should work together to validate ideas, develop software or prove that your software is meeting the needs of your organisation and users. Don't worry, we'll get onto that next.\nHow will you work together?\nSo far we've focussed on processes and tools. If you've seen the agile manifesto you may now be feeling bad. After all:\n\nIndividuals and interactions over processes and toolsManifesto for Agile Software Development\n\nWell, I wouldn't feel too bad if I were you. It's not like the "Agile Manifesto" is saying you shouldn't have tools and processes. In fact, it's easier to talk about individuals and interactions with the backdrop of an easy to understand process.\nIndividuals and interactions are however more important than processes and tools. The quality and impact of your work is dependent on a team that works well together. You are solving problems, and how you do that may have to change, you can't just blindly follow a process and hope for the best.\nTogether you need to agree on your shared values that underpin your work. 
You will need to define roles and responsibilities that describe what tasks and activities need to be performed by the team. You should also define core practices you will use to complete your work.\nShared values\nValues are the shared attributes that you expect and encourage from each of your team members. Your values are the foundation of how you will work together as a team.\nThe Agile Manifesto has four values that are best summarised as: human-centred, working software, communication and responsiveness. Extreme Programming (XP) has five: communication, simplicity, feedback, courage, respect. Why doesn't anyone ever add “fun” to the list?\nI’d suggest an exercise where each team member spends some time individually coming up with values that they feel are important and align with your team's purpose and mission. You can then compare these together and arrive at 3-7 values you all agree on.\nRoles and responsibilities\nIt's important to remember that you're forming a multidisciplinary team and that each of your team members are contributing their various strengths to the team. Roles in your team aren't there to create silos, burden individuals, or create hierarchy in the team. Roles should create clarity on what your team needs to do, those roles may be shared by the whole team, or be mostly owned by a person with certain skills.\nYou can draw a team canvas that includes the various stakeholders and skill sets needed in your team. On this canvas you can place post-its that represent the roles your team has decided on.\n\n\nIf you are practicing Scrum, you might include product owner in the product circle, developer in the technology circle, and scrum master in the middle of the Venn diagram. You will also need to include roles that Scrum doesn't define, such as user researcher and interaction designer, those go in the design circle.\nNext to roles you can put more post-its with team members' names on them. 
You can, and probably should, have multiple post-its for one person. For example, your whole team may each rotate the Scrum Master hat, rather than have someone in that dedicated role for the entire flow. You could also treat product ownership and user research as whole team activities too.\nWhile not necessarily related to the roles of the team, you can also include roles and names for the outer areas of the business, if applicable to your teams work – for instance, subject matter experts, organisational leadership and key users.\nEach role needs responsibilities that are clearly defined too. In some cases such as product owner and scrum master, you can defer to textbook definitions. Sadly though, a lot of the responsibilities of healthy and successful teams aren't defined in guides to agile frameworks.\nWhich roles are responsible for reporting regularly to the wider organisation about the progress of the team? How will you keep an eye on team health and happiness? Code quality? Again, these might not be owned by one person but you should express these explicitly and put one or more names against them.\nBy the end of building your team canvas, you should be clear on the roles and responsibilities based on what you know now. Bear in mind, this can always be revised later on.\nCore practices\nOh boy, I usually get really excited at this point.\nNext, your team should decide on what core practices they want to use when working together. You can roughly categorise core practices into the Venn diagram again.\nProduct management practices might include: planning poker, three amigos or user story mapping. You could decide to maintain a physical wall for this in a shared office space, or you may decide to keep it digital, using tools like Trello, JIRA or whatever your flavour of the month is.\nDesign practices such as usability testing can be conducted on a regular basis to ensure the software you are building is usable. 
You could decide to maintain a research wiki to make it easily accessible to your team as well as other teams too.\nTechnology practices are the backbone to delivering high quality software. Your team should be adopting test-driven and trunk-based development, pair programming, small releases, refactoring, peer reviews using pull requests, continuous integration and automated deployments.\nYou'll never complete a full list of practices, especially as you may occasionally experiment and try new things as a team. The idea here is to define a set of sensible defaults the team should always be using.\nHow will you seek feedback?\nFast feedback cycles are a key tenet of agile software delivery. They enable your team to minimise waste, make failing safe for all and guide you towards success.\nYour teams can minimise waste by testing hypotheses before building anything. Researchers, designers and developers can work together to define problem and solution hypotheses that enable them to test ideas. You might choose to spend a certain amount of an iteration testing hypotheses with user research, surveys, technical spikes and prototypes.\nBy releasing software early and often your team can have earlier opportunities to learn. You can test your software with users, particularly early adopters, to ensure you are building the right thing. The more frequently you can release software, the more opportunity you have to correct a wrong or find out if an idea has failed. Failing becomes safe when failure is cheap, and you make failure cheap by seeking feedback on a daily or weekly basis which will ensure failure costs you days rather than months.\n\n\nEnsuring your team has enough feedback mechanisms built into their ways of working means that they will have regular opportunities to adjust their course. 
The more frequently the team can correct their bearing the sooner they will deliver software that generates value for both their organisation and its users.\nWith the team you should discuss both formal and informal ways of collecting feedback.\nFormal Feedback\nYour team should decide what formal feedback they would like to include in their ways of working and when. This is usually done by agreeing practices that need to occur at certain points of your flow to encourage feedback. They generally should happen at those points in time unless there are exceptional circumstances.\nUsing workshops like user story mapping and including organisational leadership, the team and other stakeholders, you can build consensus and receive feedback on your plan. Backlog planning is another way of ensuring your team can provide feedback to the product owner and vice-versa.\nYou could consider using three amigos, a practice that historically included the product owner or business analyst, a tester and a developer. These days organisations that use automated testing may instead choose to include a user researcher or designer in place of a tester. You can use three amigos to ensure work is ready before it’s picked up for development, check that it meets the team definition of ‘done’ when complete, and then to prove it is having the impact you set out to achieve by reviewing KPIs and thorough testing with users.\nDevelopers in your team may choose to use peer reviews before code is merged to provide the wider team, including designers, an opportunity to review work. Any visual changes should either be deployed in a dedicated environment for the review or at least be included as screenshots along with an explanation of the changes so that designers can give feedback before code is merged.\nThe showcase and retrospective are common formal feedback mechanisms that most software development teams include in their process. 
The showcase enables the organisation and its users to provide feedback to the team. The retrospective enables the team to give feedback to itself.\nInformal feedback\nYour team should also decide what informal feedback they wish to encourage. These are practices or team norms that can be used on an ad-hoc basis.\nYour team should encourage a feedback culture where feedback is shared on a frequent basis, when things go well and when they don't. Feedback cultures like this, however, require constant maintenance to ensure that team members feel safe receiving and giving feedback. It’s worth it though; after all, feedback is a gift to both your colleague and your future self.\nYour team might occasionally include hypothesis testing as part of your iteration goals. By testing hypotheses in your iterations your team can work together to test ideas, whether that’s in user testing labs or by building a prototype.\nAnother practice you could adopt is research and design reviews, where the team gets together to review new research developments and designs in whatever form they take. This is an opportunity to review and critique designs with a cross-section of the team.\nYour team might choose to encourage pair programming some or all of the time. This enables pairs of developers to provide immediate feedback to each other as they work on a feature.\nIf your team is co-located with organisational users, or can easily get to them, then encouraging ad-hoc feedback by walking over to those users, laptop in hand, to check something ensures very immediate feedback. You need to establish some etiquette with your users if you are planning on doing this, though. Try not to be “that” person.\nHow will you evolve your process?\nI hope you didn't think you were going to define your ways of working up front and then be done with them? We’re talking about agile here rather than waterfall, buddy.\nYou might find these ways of working difficult to uphold at all times. And that's okay! 
It's likely that they won't all necessarily meet your team's needs. You won't really know that until you get started. Rather than fixating on it for too long, see if you can run a workshop for a couple of hours when forming your team to establish your initial ways of working and then establish a frequency at which you'll review them.\nYou should use retrospectives as a weekly or fortnightly opportunity for your team to discuss the good, the bad and the ugly regarding your ways of working. Not every retrospective format will make it easy to do so, but as long as you get an opportunity every few retrospectives, that should be enough.\nIt's worth putting in quarterly ways of working reviews with your team too. These can usually line up with your mission and quality goal reviews. Having a quarterly breakpoint to take a deep breath and reflect as a team for a day or two, or even a week, is both a great pressure valve and a revitaliser.\nBy establishing and evolving your ways of working collaboratively, your team will work as an effective unit, be able to adapt to the challenges they face and iteratively release working software that delivers value for both your organisation and its users."}},{"title":{"html":"On how truly digital organisations include every team in their software development process. Technology is no longer just an IT affair.
","plain":"On how truly digital organisations include every team in their software development process. Technology is no longer just an IT affair."},"slug":"every-team-helps-to-develop-software","tags":["Software development","Software development teams"],"publishedAt":{"pretty":"10th May 2020","iso":"2020-05-10T00:00:00.000+00:00"},"featuredImage":"/media/every-team-social.png","content":{"html":"On how truly digital organisations include every team in their software development process. Technology is no longer just an IT affair.
\nAs traditional sales and service channels are replaced by digital means, software has been brought to the organisational frontline and the back office too. Software now impacts every member of staff in their day-to-day.
\nTechnology startups embrace the symbiotic relationship between software and the services they provide their customers. By pivoting and iterating both their business model and their software together, they are able to sense and respond in competitive markets.
\nThink about Uber's ability to launch a new service, Uber Eats, in response to consumer demand for simple food delivery. Their software development teams were able to respond with technology at a rapid pace that enabled them to enter a competitive market with a compelling product.
\nThe opposite is true for disrupted organisations, such as banks, as they have to play catch-up feature by feature, rather than being able to innovate their services and software in parallel. They have not embraced a sense and respond approach, they have not empowered their software development teams, and they haven't got every team in the organisation feeding into the process.
\nFor organisations to be transformed, they must ensure that every team is represented in the software development team.
\nThe value that users derive from your software is in enabling their needs to be met. Users are employing your software to complete a task; they might be buying new clothes, renewing a driving licence or preparing month-end accounts. For users to do that successfully, the software must be usable too.
\nAll too often software is shaped to solve organisational needs, rather than user needs. Why did I have to confirm my email address with Atlassian as I logged into Trello, even though I've been signed up to Trello for years? Why did I have to jump through confusing hoops – in this example, a multistep form – completely unrelated to my own needs of using Trello?
\nI later found that my username all over Trello had changed to an outdated username from my days of using BitBucket. To resolve that I then had to sign into Atlassian once again and change my details there, just so my colleagues that I shared Trello boards with would stop teasing me about my old gamer username.
\nOf course, organisational and user needs must be balanced. After all, it is organisational resources being spent on software so that the organisation's own needs are met. An e-commerce business wants to sell its wares in order to generate revenue and profit. That's the organisational value derived from e-commerce software.
\nThe important thing for organisations to remember is that for their needs to be met sustainably, their software must also meet the needs of their users:
\nIt is therefore critical to represent both sets of needs in your team. The whole team ought to be empathetic towards both users and the organisation. User researchers and designers in particular can help focus the team on user needs, but design and research should be a team sport that everyone is involved in.
\nA subject-matter expert (SME) can also be included in your team to represent organisational policy and users within your team. Having a finance controller as an SME in a software development team developing a refunds and reconciliation service means that they will be able to explain finance jargon and can be a proxy for other finance users.
\nSoftware is critical to staff who use it to interact with customers or colleagues while performing their role. There is a huge opportunity to close the feedback loop by ensuring your software development team works closely with staff users.
\nCo-locating your team with the staff who will use the software they are building will enable fast feedback opportunities. Just walk your laptop over to a potential staff member and get that feedback. There's no need to wait for testing labs in a week or two.
\nBy working closely with staff, you can not only digitise their processes but transform the way they work altogether. Coming back to the symbiotic relationship that technology startups have between business model and software, you can create this same relationship between your software development team and staff users.
\nBy mapping the processes of service areas, you can help staff get a better understanding of the way they work and this provides them with tools to self-reflect on their process and how it might be improved regardless of technology. It also provides an opportunity to support and challenge them by showing the art of the possible with software.
\nOrganisational leadership must be represented within the team. Your team requires this leadership capability in order to make solid prioritisation decisions. This leadership ought to be able to defend the team's ability to pivot and adjust their plan based on their learnings.
\nYou aren't necessarily going to be able to have the CEO of your organisation as part of your team on a daily basis; not even your CIO or CTO will be able to afford that kind of involvement if your organisation is at any kind of scale.
\nWhat you should be able to have is a service or product owner who is involved on a regular basis and is empowered to make decisions on the organisation's behalf. Your design and technical leaders within the team should also be able to make decisions autonomously regarding usability and feasibility.
\nThis leadership must be able to challenge the wider organisation. If issues outside the remit of the team are blocking progress, or if policy and service areas need to be informed by learnings from the software development process, then the leadership within the team has a responsibility – and must be able – to challenge the organisation.
\nSoftware, and digital transformation in general, presents a great opportunity for organisational change. By acting more like technology startups, slow-moving organisations can begin to embrace change more readily.
\nOnly when your software development team works hand-in-hand with the wider organisation and its users can they maximise the impact of the software they are developing.
","plain":"On how truly digital organisations include every team in their software development process. Technology is no longer just an IT affair.\n\n\nAs traditional sales and service channels are replaced by digital means, software has been brought to the organisational frontline and the back office too. Software now impacts every member of staff in their day-to-day.\nTechnology startups embrace the symbiotic relationship between software and the services they provide their customers. By pivoting and iterating both their business model and their software together, they are able to sense and respond in competitive markets.\nThink about Uber's ability to launch a new service, Uber Eats, in response to consumer demand for simple food delivery. Their software development teams were able to respond with technology at a rapid pace that enabled them to enter a competitive market with a compelling product.\nThe opposite is true for disrupted organisations, such as banks, as they have to play catch-up feature by feature, rather than being able to innovate their services and software in parallel. They have not embraced a sense and respond approach, they have not empowered their software development teams, and they haven't got every team in the organisation feeding into the process.\nFor organisations to be transformed, they must ensure that every team is represented in the software development team.\nBalancing user and organisational needs\nThe value that users derive from your software is in enabling their needs to be met. Users are employing your software to complete a task; they might be buying new clothes, renewing a driving licence or preparing month-end accounts. For users to do that successfully, the software must be usable too.\nAll too often software is shaped to solve organisational needs, rather than user needs. Why did I have to confirm my email address with Atlassian as I logged into Trello, even though I've been signed up to Trello for years? 
Why did I have to jump through confusing hoops – in this example, a multistep form – completely unrelated to my own needs of using Trello?\nI later found that my username all over Trello had changed to an outdated username from my days of using BitBucket. To resolve that I then had to sign into Atlassian once again and change my details there, just so my colleagues that I shared Trello boards with would stop teasing me about my old gamer username.\nOf course, organisational and user needs must be balanced. After all, it is organisational resources being spent on software so that the organisation's own needs are met. An e-commerce business wants to sell its wares in order to generate revenue and profit. That's the organisational value derived from e-commerce software.\n\n\nThe important thing for organisations to remember is that for their needs to be met sustainably, their software must also meet the needs of their users:\n\nIf your e-commerce store sucks and you've got competition: your revenue will not be sustainable\nIf you don't have competition: you are a sitting duck awaiting disruption\n\nIt is therefore critical to represent both sets of needs in your team. The whole team ought to be empathetic towards both users and the organisation. User researchers and designers in particular can help focus the team on user needs, but design and research should be a team sport that everyone is involved in.\nA subject-matter expert (SME) can also be included in your team to represent organisational policy and users within your team. Having a finance controller as an SME in a software development team developing a refunds and reconciliation service means that they will be able to explain finance jargon and can be a proxy for other finance users.\nSupporting staff\nSoftware is critical to staff who use it to interact with customers or colleagues while performing their role. 
There is a huge opportunity to close the feedback loop by ensuring your software development team works closely with staff users.\nCo-locating your team with the staff who will use the software they are building will enable fast feedback opportunities. Just walk your laptop over to a potential staff member and get that feedback. There's no need to wait for testing labs in a week or two.\nBy working closely with staff, you can not only digitise their processes but transform the way they work altogether. Coming back to the symbiotic relationship that technology startups have between business model and software, you can create this same relationship between your software development team and staff users.\nBy mapping the processes of service areas, you can help staff get a better understanding of the way they work and this provides them with tools to self-reflect on their process and how it might be improved regardless of technology. It also provides an opportunity to support and challenge them by showing the art of the possible with software.\nEmpowered to challenge\nOrganisational leadership must be represented within the team. Your team requires this leadership capability in order to make solid prioritisation decisions. This leadership ought to be able to defend the team's ability to pivot and adjust their plan based on their learnings.\nYou aren't necessarily going to be able to have the CEO of your organisation as part of your team on a daily basis; not even your CIO or CTO will be able to afford that kind of involvement if your organisation is at any kind of scale.\nWhat you should be able to have is a service or product owner who is involved on a regular basis and is empowered to make decisions on the organisation's behalf. Your design and technical leaders within the team should also be able to make decisions autonomously regarding usability and feasibility.\nThis leadership must be able to challenge the wider organisation. 
If issues outside the remit of the team are blocking progress, or if policy and service areas need to be informed by learnings from the software development process, then the leadership within the team has a responsibility – and must be able – to challenge the organisation.\nSoftware is a good excuse for organisational change\nSoftware, and digital transformation in general, presents a great opportunity for organisational change. By acting more like technology startups, slow-moving organisations can begin to embrace change more readily.\nOnly when your software development team works hand-in-hand with the wider organisation and its users can they maximise the impact of the software they are developing."}},{"title":{"html":"On software development teams: how to form one, what good looks like and how to set them up for success.
","plain":"On software development teams: how to form one, what good looks like and how to set them up for success."},"slug":"what-is-a-software-development-team","tags":["Software development","Software development teams"],"publishedAt":{"pretty":"6th May 2020","iso":"2020-05-06T00:00:00.000+00:00"},"featuredImage":"/media/successful-team.png","content":{"html":"On software development teams: how to form one, what good looks like and how to set them up for success.
\nDigital technology is now pervasive through societies world wide. Industries have been disrupted – even governments, too – and we now find ourselves in the software age. Software development is mainstream and organisations have now realised that in order to stay relevant they must embrace software to the point where most are now funding their own software teams.
\nWhat exactly is a software development team? The answer may seem obvious and I suppose it is in some ways – clearly, it's a team that develops software. How useful is that definition for organisations forming software development teams for the first time? What does a good one look like? How do you know if your software development team is set up for success?
\nThree factors need to be considered when developing software: value, usability and feasibility. Your team should have the necessary skills to ensure these factors are carefully balanced. The software they develop needs to be valuable, usable and feasible.
\nA successful team will weigh up each of these factors and make tradeoffs where necessary when deciding what to build, when to build it and how to build it. Failing to balance these factors can have undesired consequences:
\nThe last point was a little tongue-in-cheek, as you might have detected. It's an important point, though. Often business users do not get a choice in the software they use to do their job. Often their managers or directors will have made a purchasing decision based on cost rather than usability.
\nA multidisciplinary approach is needed for each of these factors to be considered. We combine two sets of needs with the skills required to develop software. We combine organisational needs with the needs of its users to identify value. We bring both design and development folk together to translate this value into usable and feasible software, or in other words, working software.
\nThere are five properties a team must have in order to be successful in developing working software: empathy, trust, capability, time and feedback.
\nA high degree of empathy is needed when developing software and in fact, is a property of all high-performing teams. Empathy for each other is needed in order for a team to work well together and understand each other. Diversity can be a strength for teams but only when the team also has empathy. Empathy is also needed for organisational and user needs. A team must understand the drivers behind the software they are developing.
\nThe team must have trust from the wider organisation so that they may be empowered to solve problems, test their hypotheses and release the software they develop. If the team is not empowered to decide what to build, how to build, or when they can release software, they are not a software development team, they are a feature factory.
\nYour team needs the capability to derive value from the business and user needs in order to build usable and feasible software. This does not necessarily mean you need a team of distinct specialisms; you could instead form a team of generalists, but whatever you do, you need a team who are capable of developing working software.
\nA team requires the time to turn the business and user needs into working software. Every member of the team needs to be explicit in terms of their time commitment to developing the software. The team as a whole needs enough time in order to develop working software. The team needs time together where context and understanding can be shared, but also time apart to allow thoughts to be digested.
\nA continuous cycle of feedback is the only way a team will be able to test whether what they're developing is valuable, usable and feasible. Feedback is the mechanism by which a team learns together. The more frequently your team can learn, the faster they will be able to adapt their plans and deliver value.
\nEach of the properties needs to be understood when forming a team, and protected throughout the lifetime of the team.
\nA team first and foremost must have empathy for each other and the people they are solving problems for: their users and even their most tricky organisational stakeholders. Empathy must be nurtured throughout the course of the team's life in order for it to be maintained. It is critical for everyone, but especially leaders, to demonstrate and champion empathy.
\nHaving a shared understanding and purpose is a good starting point for focusing the team and fostering empathy. There are a number of ways you can express a team's purpose:
\nYour team may complete one or more of the above, or use a different activity altogether; the important thing here is that they have a shared understanding of why the team exists, the challenges they face and that the challenges can only be overcome through teamwork.
\nThe organisation must trust the team to solve problems and should therefore give the team a problem to solve rather than instructions to be completed. Your team may at this point wish to define a problem statement, or the sponsor establishing the team may seek to define a problem statement with the team.
\nA good problem statement will explain a desired state or vision that isn't currently achieved. It will explain the current reality or context. It will explain the impact of action or inaction. Lastly, it will set out a question or hypothesis along with measures to prove or disprove success.
\nWith empathy and trust established, next you need to ensure your team has the capabilities and time needed to solve the problem it has been set. This means the team needs a budget to cover both staff time and expenses. If your team can't dedicate time to their mission then it won't be achieved.
\nThis also means recruiting a team with necessary skills. The team will need the leadership necessary to make decisions on value, usability and feasibility. This will often involve product, design and development skillsets, but may also include subject matter experts, senior leadership and even users.
\nFinally, the team need to define and agree ways of working that enable rapid feedback cycles. Regardless of your approach to process, you need to ensure your team are regularly:
\nShort feedback cycles will minimise wasted time building the wrong thing and will guide the team towards developing working software.
\nA team must be healthy in order to be sustainably successful. The health of a team has several measures including safety, balance, learning and happiness.
\nThe team needs certain safety levels in order to be honest and productive while working together. You can measure safety levels by looking at the team's ability to express how they're feeling. Can they provide and receive honest feedback? Is blame projected when things go wrong? Do they feel they can push back if they disagree? Are they able to compromise in order to move forward?
\nTeam members need to balance their work with the rest of their lives. Balance is required in order to sustainably develop software. You can measure balance by looking at how frequently people are starting early or working late in order to meet goals. Are the team taking adequate holiday? How often are the team crunching? Are there high levels of sickness in the team?
\nTeams have to be empowered to make their own decisions and solve problems. How often are your team blocked by outside influences? Does the team have the necessary leadership within the team to make decisions? Is the team trusted by its organisation?
\nTeams should always be learning. Does the team feel like they are learning on a regular basis? Is the team's confidence growing with time? Are the goals of the team in line with each member's career goals?
\nA healthy team is a happy team. Happiness is perhaps the hardest measure of success as everyone has ups and downs, and so happiness will vary, but the team as a whole may still be healthy. To measure happiness you need to look at the team as a whole as well as on an individual basis. Is the majority of the team unhappy? Is only one person really unhappy? You need to dig into the why behind happiness and unhappiness in order for it to be useful. Use regular check-ins with your team to understand their happiness levels.
\nThere is only one measure of success for a software team: are they developing software that is valuable, usable and feasible? Are they developing working software?
\nThe team should be trusted to solve a problem. Along with that comes the responsibility of proving their success. This is both rewarding and fair. Proving or disproving their success is also important for feedback and adapting the plan based on new learnings. A team should have the freedom to iterate towards success.
\nMeasuring value, usability and feasibility should be a continuous process for the team. At the outset the team should have defined the reason why they exist and have a clear problem statement they are working on solving. The team should regularly reflect on whether they are achieving their mission.
\nProblem statements and missions can sometimes be long-lived, and so the team will also want to set out smaller problems, missions or hypotheses to work on. Each should have experiments and measurable objectives that can be used to determine success.
\nAt no point should the team consider they are done after releasing software. Before they are done they must prove what they released is working. Only then are they truly accountable and empowered to develop software that is valuable, usable and feasible. Only then have they delivered working software.
","plain":"On software development teams: how to form one, what good looks like and how to set them up for success.\nDigital technology is now pervasive throughout societies worldwide. Industries have been disrupted – even governments, too – and we now find ourselves in the software age. Software development is mainstream and organisations have realised that in order to stay relevant they must embrace software, to the point where most now fund their own software teams.\nWhat exactly is a software development team? The answer may seem obvious and I suppose it is in some ways – clearly, it's a team that develops software. How useful is that definition for organisations forming software development teams for the first time? What does a good one look like? How do you know if your software development team is set up for success?\nThey develop working software\nThree factors need to be considered when developing software: value, usability and feasibility. Your team should have the necessary skills to ensure these factors are carefully balanced. The software they develop needs to be valuable, usable and feasible.\n\n\nA successful team will weigh up each of these factors and make tradeoffs where necessary when deciding what to build, when to build it and how to build it. Failing to balance these factors can have undesired consequences:\n\nIf what you develop is valuable and usable but not feasible you'll never actually release any software\nIf what you build is usable and feasible but not valuable to your organisation, then you won't be able to claim any return on your investment\nIf what you build is usable and feasible but not valuable to your users, then I'm sorry to say but no one will use your software\nIf what you build is valuable and feasible but not usable, congratulations – you built enterprise software\n\nThe last point was a little tongue-in-cheek, as you might have detected. It's an important point, though. 
Often business users do not get a choice in the software they use to do their job. Often their managers or directors will have made a purchasing decision based on cost rather than usability.\nA multidisciplinary approach is needed for each of these factors to be considered. We combine two sets of needs with the skills required to develop software. We combine organisational needs with the needs of its users to identify value. We bring both design and development folk together to translate this value into usable and feasible software, or in other words, working software.\nProperties of a successful team\nThere are five properties a team must have in order to be successful in developing working software: empathy, trust, capability, time and feedback.\n\n\nA high degree of empathy is needed when developing software and in fact, is a property of all high-performing teams. Empathy for each other is needed in order for a team to work well together and understand each other. Diversity can be a strength for teams but only when the team also has empathy. Empathy is also needed for organisational and user needs. A team must understand the drivers behind the software they are developing.\nThe team must have trust from the wider organisation so that they may be empowered to solve problems, test their hypotheses and release the software they develop. If the team is not empowered to decide what to build, how to build, or when they can release software, they are not a software development team, they are a feature factory.\nYour team needs the capability to derive value from the business and user needs in order to build usable and feasible software. This does not necessarily mean you need a team of distinct specialisms; you could instead form a team of generalists, but whatever you do, you need a team who are capable of developing working software.\nA team requires the time to turn the business and user needs into working software. 
Every member of the team needs to be explicit in terms of their time commitment to developing the software. The team as a whole needs enough time in order to develop working software. The team needs time together where context and understanding can be shared, but also time apart to allow thoughts to be digested.\nA continuous cycle of feedback is the only way a team will be able to test whether what they're developing is valuable, usable and feasible. Feedback is the mechanism by which a team learns together. The more frequently your team can learn, the faster they will be able to adapt their plans and deliver value.\nForming a team\nEach of the properties needs to be understood when forming a team, and protected throughout the lifetime of the team.\nA team first and foremost must have empathy for each other and the people they are solving problems for: their users and even their most tricky organisational stakeholders. Empathy must be nurtured throughout the course of the team's life in order for it to be maintained. It is critical for everyone, but especially leaders, to demonstrate and champion empathy.\nHaving a shared understanding and purpose is a good starting point for focusing the team and fostering empathy. There are a number of ways you can express a team's purpose:\n\nA purpose, vision, mission and values statement\nObjectives in the form of OKRs, MOKRs and/or KPIs\nA problem statement\n\nYour team may complete one or more of the above, or use a different activity altogether; the important thing here is that they have a shared understanding of why the team exists, the challenges they face and that the challenges can only be overcome through teamwork.\nThe organisation must trust the team to solve problems and should therefore give the team a problem to solve rather than instructions to be completed. 
Your team may at this point wish to define a problem statement, or the sponsor establishing the team may seek to define a problem statement with the team.\nA good problem statement will explain a desired state or vision that isn't currently achieved. It will explain the current reality or context. It will explain the impact of action or inaction. Lastly, it will set out a question or hypothesis along with measures to prove or disprove success.\nWith empathy and trust established, next you need to ensure your team has the capabilities and time needed to solve the problem it has been set. This means the team needs a budget to cover both staff time and expenses. If your team can't dedicate time to their mission then it won't be achieved.\nThis also means recruiting a team with the necessary skills. The team will need the leadership necessary to make decisions on value, usability and feasibility. This will often involve product, design and development skillsets, but may also include subject matter experts, senior leadership and even users.\nFinally, the team needs to define and agree ways of working that enable rapid feedback cycles. Regardless of your approach to process, you need to ensure your team are regularly:\n\nProviding feedback to each other\nSeeking feedback from the organisation and users\nLearning from feedback and deliberate practice\nChanging their plan in accordance with feedback\n\nShort feedback cycles will minimise wasted time building the wrong thing and will guide the team towards developing working software.\nMeasures of a healthy team\nA team must be healthy in order to be sustainably successful. The health of a team has several measures including safety, balance, learning and happiness.\n\n\nThe team needs certain safety levels in order to be honest and productive while working together. You can measure safety levels by looking at the team's ability to express how they're feeling. Can they provide and receive honest feedback? 
Is blame projected when things go wrong? Do they feel they can push back if they disagree? Are they able to compromise in order to move forward?\nTeam members need to balance their work with the rest of their lives. Balance is required in order to sustainably develop software. You can measure balance by looking at how frequently people are starting early or working late in order to meet goals. Are the team taking adequate holiday? How often are the team crunching? Are there high levels of sickness in the team?\nTeams have to be empowered to make their own decisions and solve problems. How often are your team blocked by outside influences? Does the team have the necessary leadership within the team to make decisions? Is the team trusted by its organisation?\nTeams should always be learning. Does the team feel like they are learning on a regular basis? Is the team's confidence growing with time? Are the goals of the team in line with each member's career goals?\nA healthy team is a happy team. Happiness is perhaps the hardest measure of success as everyone has ups and downs, and so happiness will vary but the team as a whole may still be healthy. To measure happiness you need to look at the team as a whole as well as on an individual basis. Is the majority of the team unhappy? Is only one person really unhappy? You need to dig into the why behind happiness and unhappiness in order for it to be useful. Use regular check-ins with your team to understand their happiness levels.\nThe measure of a successful team\nThere is only one measure of success for a software team: are they developing software that is valuable, usable and feasible? Are they developing working software?\nThe team should be trusted to solve a problem. Along with that comes the responsibility of proving their success. This is both rewarding and fair. Proving or disproving their success is also important for feedback and adapting the plan based on new learnings. 
A team should have the freedom to iterate towards success.\nMeasuring value, usability and feasibility should be a continuous process of the team. At the outset the team should have defined the reason why they exist and have a clear problem statement they are working on solving. The team should be reflecting on a regular basis as to whether they are achieving their mission.\nProblem statements and missions can sometimes be long-lived, and so the team will also want to set out smaller problems, missions or hypotheses to work on. Each should have experiments and measurable objectives that can be used to determine success.\nAt no point should the team consider they are done after releasing software. Before they are done they must prove what they released is working. Only then are they truly accountable and empowered to develop software that is valuable, usable and feasible. Only then have they delivered working software."}},{"title":{"html":"On the irony that social distancing has brought us all together more frequently. I think I'm going to need to create some digital social distancing for myself too.
","plain":"On the irony that social distancing has brought us all together more frequently. I think I'm going to need to create some digital social distancing for myself too."},"slug":"digital-social-distancing","tags":["COVID19"],"publishedAt":{"pretty":"2nd April 2020","iso":"2020-04-02T00:00:00.000+00:00"},"featuredImage":"/media/social-digital-distancing.jpg","content":{"html":"On the irony that social distancing has brought us all together more frequently. I think I'm going to need to create some digital social distancing for myself too.
\nIs this how fully remote companies always feel?
\nI don't know about you but I've spent more time on video calls in the last two weeks than I ever have before. Whether it's close to 8-9 hours of back-to-back calls for work, or catching up with my family in the evenings – I'm finding myself exhausted with all this connectivity.
\nIn a moment where we are socially distancing from each other physically, we are being drawn together through digital mediums. It's lovely in a sense that we do indeed cling together – that's another tune that rings in my ear as I write this. I get warm fuzzies thinking about the love people are generally showing each other right now. My faith in humanity is rewarded by the kindness in my neighbours' eyes – even if they have got their mouths covered as we exchange glances in the hallway. What a weird time.
\nI have questions, as you probably do too. I've been fascinated by why all this digital connectivity is so tiring. Why is it more tiring to spend time on calls than in the office? Why is my diary even busier than usual and why am I dialled into more meetings now, even though my workload hasn't exactly increased? If anything, it’s decreased. Weird.
\nAnother weird thing I've noticed is that we have less time for novel side chats and small talk. There’s no way to turn to a colleague and whisper something to them. I'm sure most of my colleagues are happier that I'm unable to disrupt meetings in that way – silver linings?
\nI've also found larger group meetings are more like one-directional webinars. It's weird talking into a silent void. I love feedback, I thrive on it! I've also found that some interviewees really don't open up unless you give them audio cues that you're listening and engaging with them. "Everybody, go on mute" doesn't work for everybody.
\nAnyhow, I'm tired. And I can only move as slow as my diary lets me. It's time for some digital social distancing.
\nMy measures for digital social distancing\nI was sitting at my desk frustrated at my calendar looking busier than ever. We had our quarterly leadership planning scheduled for Monday and Tuesday – and full-day workshops, at that. I decided to send a message on Slack to my colleagues.
\nIn the first instance I asked Rory, our CEO, if we could limit our sessions to 08:30-11:30 and then 13:30-15:30 to ensure we had big enough breaks and weren't spending more than 5 hours of the day on calls. He approved and I'm so grateful that he did; my brain felt all the better for it this week and we still achieved all our goals for the two days of planning.
\nMore generally I've put the following measures in place:
\nI've set my Google Calendar office hours to 08:30-15:30 – I'm not accepting any meetings outside of these hours
\nI've set my Google Calendar events to only show if I've accepted them in my email inbox – this means I need to have explicitly said yes for me to attend a meeting, stopping last-minute invites that I'm not aware of
\nI'm only accepting meetings of up to 2 hours per week for each service area of the business I'm involved in
\nI'm continuing to provide 30 minutes every two weeks for all of my line management/career development 121s
\nI'm only accepting 10 minute coffees for anyone else who needs my time
\nThe aim of these measures is to force myself and others to respect my time. It is also a way of prioritising. Much like agile delivery teams using timeboxes to constrain and force prioritisation, I'm using the same technique with all of my commitments.
\nAnyhow, trial and error, next week is my first full week with these all in place so I'll let you know how it goes. I hope I can continue moving slow.
\nI'd be interested in hearing from anyone else who is experiencing similar exhaustion from calls and what digital social distancing you might be doing to give yourself more space. Let me know what you think on Twitter @LukeMorton.
","plain":"On the irony that social distancing has brought us all together more frequently. I think I'm going to need to create some digital social distancing for myself too.\n\n\nIs this how fully remote companies always feel?\nI don't know about you but I've spent more time on video calls in the last two weeks than I ever have before. Whether it's close to 8-9 hours of back-to-back calls for work, or catching up with my family in the evenings – I'm finding myself exhausted with all this connectivity.\nIn a moment where we are socially distancing from each other physically, we are being drawn together through digital mediums. It's lovely in a sense that we do indeed cling together – that's another tune that rings in my ear as I write this. I get warm fuzzies thinking about the love people are generally showing each other right now. My faith in humanity is rewarded by the kindness in my neighbours' eyes – even if they have got their mouths covered as we exchange glances in the hallway. What a weird time.\nI have questions, as you probably do too. I've been fascinated by why all this digital connectivity is so tiring. Why is it more tiring to spend time on calls than in the office? Why is my diary even busier than usual and why am I dialled into more meetings now, even though my workload hasn't exactly increased? If anything, it’s decreased. Weird.\nAnother weird thing I've noticed is that we have less time for novel side chats and small talk. There’s no way to turn to a colleague and whisper something to them. I'm sure most of my colleagues are happier that I'm unable to disrupt meetings in that way – silver linings?\nI've also found larger group meetings are more like one-directional webinars. It's weird talking into a silent void. I love feedback, I thrive on it! I've also found that some interviewees really don't open up unless you give them audio cues that you're listening and engaging with them. 
"Everybody, go on mute" doesn't work for everybody.\nAnyhow, I'm tired. And I can only move as slow as my diary lets me. It's time for some digital social distancing.\nMy measures for digital social distancing\nI was sitting at my desk frustrated at my calendar looking busier than ever. We had our quarterly leadership planning scheduled for Monday and Tuesday – and full-day workshops, at that. I decided to send a message on Slack to my colleagues.\nIn the first instance I asked Rory, our CEO, if we could limit our sessions to 08:30-11:30 and then 13:30-15:30 to ensure we had big enough breaks and weren't spending more than 5 hours of the day on calls. He approved and I'm so grateful that he did; my brain felt all the better for it this week and we still achieved all our goals for the two days of planning.\nMore generally I've put the following measures in place:\n\nI've set my Google Calendar office hours to 08:30-15:30 – I'm not accepting any meetings outside of these hours\nI've set my Google Calendar events to only show if I've accepted them in my email inbox – this means I need to have explicitly said yes for me to attend a meeting, stopping last-minute invites that I'm not aware of\nI'm only accepting meetings of up to 2 hours per week for each service area of the business I'm involved in\nI'm continuing to provide 30 minutes every two weeks for all of my line management/career development 121s\nI'm only accepting 10 minute coffees for anyone else who needs my time\n\nThe aim of these measures is to force myself and others to respect my time. It is also a way of prioritising. Much like agile delivery teams using timeboxes to constrain and force prioritisation, I'm using the same technique with all of my commitments.\nAnyhow, trial and error, next week is my first full week with these all in place so I'll let you know how it goes. 
I hope I can continue moving slow.\nI'd be interested in hearing from anyone else who is experiencing similar exhaustion from calls and what digital social distancing you might be doing to give yourself more space. Let me know what you think on Twitter @LukeMorton."}},{"title":{"html":"On slowing the pace of life – we all face it whether you like the idea or not. I've been facing up to this reality and have found the time and space to share my thoughts here.
","plain":"On slowing the pace of life – we all face it whether you like the idea or not. I've been facing up to this reality and have found the time and space to share my thoughts here."},"slug":"moving-slow","tags":["COVID19"],"publishedAt":{"pretty":"22nd March 2020","iso":"2020-03-22T00:00:00.000+00:00"},"featuredImage":"/media/moving-slow.jpg","content":{"html":"On slowing the pace of life – we all face it whether you like the idea or not. I've been facing up to this reality and have found the time and space to share my thoughts here.
\nStrange times, aren't they? As I sit here reflecting on a Sunday morning, looking out over Manchester city centre from my flat window, I can't help but think about the lyric "easy like Sunday morning". It's not just Sunday that's moving slow though, right? It's as if time itself has slowed.
\nPlease excuse the length of the next paragraph...
\nThe last few months have seen me at my busiest. I've recently opened Made Tech's Manchester office while trying to keep up with the duties of CTO for our group. A common week would see me involved in writing sales proposals; hiring engineers, delivery folk and management staff for Manchester; ensuring existing client projects are running smoothly across London, Manchester and Newcastle; line-managing our senior engineers; attending our weekly senior leadership team meeting; helping roll out a change to Made Tech's operating model; writing and reviewing content for our new book; networking for new business opportunities; ensuring new client contracts are moving ahead and getting signed; trying to work out how I open up an office in Newcastle; lining up more office space for Manchester for the third time since we moved here, as we're growing much faster than expected; and somewhere in that, trying to find time to find the time to get to hospital for my diabetes checkup.
\nI know I'm in a massively fortunate position to still have a job and to have the time to breathe when so many can't, but in the spirit of being open about mental wellbeing, that was pretty cathartic to write. I know I'm balancing a lot. I know I have to probably let go of some of the above. I know I need to make more space for myself. I know I'm somewhat addicted to my work in an unhealthy but probably fairly common way. Little did I know I was on a collision course.
\nCrash. I smash into the reality of this virus like a car colliding with a brick wall. Thrown through the windscreen, or perhaps the looking glass, I find myself in an alternate reality. I find myself moving slow.
\nI've spent the past week since last Sunday mostly in my flat. I was super focussed going into Monday; I had a sales opportunity to work on and working from home I had no distractions. Finally, some dedicated and focussed time. Once I completed my part of the sales proposal on Wednesday evening, I slumped. I'd had a cold for a while, but I hadn't the time for a cold and so it didn't exist – until it did. Until my work was done and my isolation realised itself in my head. With no more work to do and lack of social interactions to keep me busy, I let myself slump.
\nI took Thursday and Friday to myself. I let my heart calm down. I let my head clear all the noise. As I said before, I'd usually have worked through a cold like this, but I decided I deserved some time – I needed a mental recovery. I let go and got used to moving slow.
\nSo. It's Sunday. I'm looking out of my window and I'm calm. It's a beautiful blue sky out there, I can feel the warmth on my skin through the window. It is only this morning that I realise how much I needed to catch my breath. In this alternate reality there really is more time, time has warped, and as horrible and weird as things are, I'm taking this time for myself.
\nI'm taking more time to make my coffee. I'm finally reading my amassed book collection. I'm sitting and listening to music while doing nothing else. Regular me would be having fits at the lack of productivity. But these aren't regular times, are they?
\nI'm sending out positive thoughts to all the people I know and to those I don't. I hope on this Mother's Day you can find, at least a moment of, peace in this solitude. Feel free to reach out to me on Twitter, if it helps. I'll be right here. I've got plenty of time.
\nI'm moving slow.
","plain":"On slowing the pace of life – we all face it whether you like the idea or not. I've been facing up to this reality and have found the time and space to share my thoughts here.\n\n\nStrange times, aren't they? As I sit here reflecting on a Sunday morning, looking out over Manchester city centre from my flat window, I can't help but think about the lyric "easy like Sunday morning". It's not just Sunday that's moving slow though, right? It's as if time itself has slowed.\nPlease excuse the length of the next paragraph...\nThe last few months have seen me at my busiest. I've recently opened Made Tech's Manchester office while trying to keep up with the duties of CTO for our group. A common week would see me involved in writing sales proposals; hiring engineers, delivery folk and management staff for Manchester; ensuring existing client projects are running smoothly across London, Manchester and Newcastle; line-managing our senior engineers; attending our weekly senior leadership team meeting; helping roll out a change to Made Tech's operating model; writing and reviewing content for our new book; networking for new business opportunities; ensuring new client contracts are moving ahead and getting signed; trying to work out how I open up an office in Newcastle; lining up more office space for Manchester for the third time since we moved here, as we're growing much faster than expected; and somewhere in that, trying to find time to find the time to get to hospital for my diabetes checkup.\nI know I'm in a massively fortunate position to still have a job and to have the time to breathe when so many can't, but in the spirit of being open about mental wellbeing, that was pretty cathartic to write. I know I'm balancing a lot. I know I have to probably let go of some of the above. I know I need to make more space for myself. I know I'm somewhat addicted to my work in an unhealthy but probably fairly common way. Little did I know I was on a collision course.\nCrash. 
I smash into the reality of this virus like a car colliding with a brick wall. Thrown through the windscreen, or perhaps the looking glass, I find myself in an alternate reality. I find myself moving slow.\nI've spent the past week since last Sunday mostly in my flat. I was super focussed going into Monday; I had a sales opportunity to work on and working from home I had no distractions. Finally, some dedicated and focussed time. Once I completed my part of the sales proposal on Wednesday evening, I slumped. I'd had a cold for a while, but I hadn't the time for a cold and so it didn't exist – until it did. Until my work was done and my isolation realised itself in my head. With no more work to do and lack of social interactions to keep me busy, I let myself slump.\nI took Thursday and Friday to myself. I let my heart calm down. I let my head clear all the noise. As I said before, I'd usually have worked through a cold like this, but I decided I deserved some time – I needed a mental recovery. I let go and got used to moving slow.\nSo. It's Sunday. I'm looking out of my window and I'm calm. It's a beautiful blue sky out there, I can feel the warmth on my skin through the window. It is only this morning that I realise how much I needed to catch my breath. In this alternate reality there really is more time, time has warped, and as horrible and weird as things are, I'm taking this time for myself.\nI'm taking more time to make my coffee. I'm finally reading my amassed book collection. I'm sitting and listening to music while doing nothing else. Regular me would be having fits at the lack of productivity. But these aren't regular times, are they?\nI'm sending out positive thoughts to all the people I know and to those I don't. I hope on this Mother's Day you can find, at least a moment of, peace in this solitude. Feel free to reach out to me on Twitter, if it helps. I'll be right here. 
I've got plenty of time.\nI'm moving slow."}},{"title":{"html":"Introducing trunk-based development and its relationship to the widely used practice of continuous integration. If you do continuous integration, you should be doing trunk-based development.
","plain":"Introducing trunk-based development and its relationship to the widely used practice of continuous integration. If you do continuous integration, you should be doing trunk-based development."},"slug":"an-introduction-to-trunk-based-development","tags":["Software development","Trunk-based development"],"publishedAt":{"pretty":"31st December 2019","iso":"2019-12-31T00:00:00.000+00:00"},"content":{"html":"Introducing trunk-based development and its relationship to the widely used practice of continuous integration. If you do continuous integration, you should be doing trunk-based development.
\nWhile you may not have heard of trunk-based development, you most likely have heard of continuous integration. Trunk-based development builds on continuous integration and brings even more benefits to your development teams.
\nContinuous integration\nContinuous integration is the development practice of integrating your developers' work continuously, usually multiple times a day.
\nThere is an expectation that if you practice continuous integration you run automated tests against any new changes as they are merged. This helps teams deliver value rapidly as they push changes multiple times a day with the reassurance that their changes haven't broken any existing functionality.
\nBy having automated checks and encouraging teams to integrate their changes multiple times a day, you reduce the likelihood of issues arising from developers working in isolation for long periods of time introducing bugs.
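To make the idea concrete, here is a minimal sketch of that integration gate using a throwaway local repository – the file names, commit message and stand-in test script are all illustrative assumptions, not a prescribed setup:

```shell
#!/bin/sh
# Illustrative sketch of a continuous integration gate: a change is only
# merged into the shared branch once the automated checks pass.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ci@example.com"
git config user.name "CI Bot"
trunk=$(git symbolic-ref --short HEAD)   # default branch name (master or main)
printf 'exit 0\n' > run_tests.sh          # stand-in automated test suite
git add run_tests.sh
git commit -qm "Add test suite"
# A developer's change arrives on a branch...
git checkout -qb candidate-change
echo "new behaviour" > feature.txt
git add feature.txt
git commit -qm "Small change"
# ...the gate: the tests must pass (exit 0) before the change is integrated.
sh run_tests.sh
git checkout -q "$trunk"
git merge -q candidate-change -m "Integrate change after checks pass"
git log --oneline
```

In a real team this gate runs on a CI server against every push rather than by hand, but the principle is identical: no change reaches the shared branch without the checks passing.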
\nThoughtWorks and Martin Fowler both have great primers on the concept of continuous integration.
\nTrunk-based development\nTrunk-based development is effectively the same as continuous integration with the additional proviso that you merge your code into a single branch on a regular basis. Nowadays the branch is usually called master; trunk and mainline were popular names in the past, before the days of git, hence the phrase trunk-based development.
\nYou can see that trunk-based development and continuous integration go hand in hand. The only difference is that your team have committed to continuously integrating into a trunk branch, rather than, say, into a series of branches, as an alternative to trunk-based development such as GitFlow might suggest.
\nWarning: Avoid using GitFlow. Point your teams to trunk-based development resources if you find them using it.\nWhy adopt trunk-based development?
\n\n\nBranches create distance between developers and we do not want that\n
\n
— Frank Compagner, Guerrilla Games
If you don't currently practice trunk-based development, you will likely be suffering from a number of problems:
\nLong-lived branches mean your developers will have a greatly increased chance of introducing bugs due to diverging code
\nBureaucracy of multi-branch strategies, of which GitFlow is the worst, will likely be hindering your team's ability to deliver fast
\nMulti-branch strategies often come with hierarchy or gatekeepers which means your organisation doesn't trust their developers to deploy their own code
\nPeer reviews are tedious as change sets are often hundreds if not thousands of lines of code long which either means reviews are rushed or take forever
\nYour developers likely experience a crunch time at the end of iterations or whenever they eventually merge their branches together for a release which likely means long hours and more bugs
\nBy adopting trunk-based development, you will begin to break down these problems, reduce bugs in your systems and begin your journey towards more frequent and less risky releases. The aim should be that your teams eventually get to a point that they can release multiple times a day into production.
\nWhat trunk-based development isn't\nPeople often get confused when adopting trunk-based development, thinking it is mutually exclusive with practices such as peer reviews and branching. That's not true; in fact, most teams who practice trunk-based development still use branches on a daily basis – they just only live for short amounts of time.
\nThere's a great explanation on what trunk-based development is from 2013 by Paul Hammant.
\n\n\nOK, so GitHub pioneered the pull-request as a development workflow. This is quite compatible with TBD [trunk-based development], in that a feature/task is marshaled in a place that is not yet on the trunk/master but can be quickly.
\n
As Paul Hammant explains, the practice of Pull Requests and trunk-based development are quite compatible. In fact, you get the additional benefit of running your continuous integration suite against your Pull Request branch before merging into master, which provides an additional safety check and audit trail for your teams.
\n\n\nI’m saying nothing about what developers do on their own workstations by way of ‘local’ branching to suit their hour by hour activities.
\n
In many ways, pushing a branch to GitHub (or GitLabs or BitBucket) is just a stepping stone to merging your changes into your trunk. That's no bad thing as that stepping stone provides an audit trail of change as well as providing confidence through continuous integration checks pre-merge.
\nThe aim of trunk-based development is to avoid the pains of long-lived branches, not to avoid branching altogether.
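As a sketch of what those short-lived branches look like in practice, the following creates a throwaway repository and merges two small branches back into the trunk in quick succession, deleting each one once merged – the branch and file names are made up for illustration:

```shell
#!/bin/sh
# Illustrative sketch of short-lived branches: each carries one small
# change, is merged back into the trunk quickly, and is then deleted.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
trunk=$(git symbolic-ref --short HEAD)   # default branch name (master or main)
echo "start" > app.txt
git add app.txt
git commit -qm "Initial commit on trunk"
for change in tweak-copy fix-typo; do
  git checkout -qb "$change"             # short-lived branch
  echo "$change" >> app.txt              # one small, reviewable change
  git commit -qam "$change"
  git checkout -q "$trunk"
  git merge -q "$change"                 # integrate within hours, not weeks
  git branch -qd "$change"               # the branch is gone once merged
done
git log --oneline                        # trunk now holds all three commits
```

Note that after the loop only the trunk branch remains – the branches existed just long enough to carry a single small change through review and integration.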
\nWhere to start
\nAdopt continuous integration if you haven't already
\nDecide on a team and repository to trial trunk-based development for an iteration or two
\nHave you and your colleagues read the manual on trunk-based development
\nDiscuss as a team how you might begin chunking your work into smaller change sets, and how you plan to review each other's work more frequently
\nReview in your team retrospectives and evolve from there
\nMy recommendation would be to start using trunk-based development on a single repository for an iteration (known as a sprint in Scrum) or two and see how it goes.
\nIt can take a bit of time for your team to begin to chunk their work so they can merge multiple times a day. Where they may have previously opened a Pull Request once or twice an iteration, they will now be expected to do so multiple times a day. This means adjusting their units of work to be smaller. You need to tackle this as a team together.
\nDon't worry, smaller change sets mean easier peer reviews and less risky deploys. If your teams can begin chunking their work differently, they will then quickly feel a reduction in friction of development.
\nGood luck!
","plain":"Introducing trunk-based development and its relationship to the widely used practice of continuous integration. If you do continuous integration, you should be doing trunk-based development.\nWhile you may not have heard of trunk-based development, you most likely have heard of continuous integration. Trunk-based development builds on continuous integration and brings even more benefits to your development teams.\nContinuous integration\nContinuous integration is the development practice of integrating your developers' work continuously, usually multiple times a day.\nThere is an expectation that if you practice continuous integration you run automated tests against any new changes as they are merged. This helps teams deliver value rapidly as they push changes multiple times a day with the reassurance that their changes haven't broken any existing functionality.\nBy having automated checks and encouraging teams to integrate their changes multiple times a day, you reduce the likelihood of issues arising from developers working in isolation for long periods of time introducing bugs.\nThoughtWorks and Martin Fowler both have great primers on the concept of continuous integration.\nTrunk-based development\nTrunk-based development is effectively the same as continuous integration with the additional proviso that you merge your code into a single branch on a regular basis. Nowadays the branch is usually called master; trunk and mainline were popular names in the past, before the days of git, hence the phrase trunk-based development.\nYou can see that trunk-based development and continuous integration go hand in hand. The only difference is that your team have committed to continuously integrating into a trunk branch, rather than, say, into a series of branches, as an alternative to trunk-based development such as GitFlow might suggest.\nWarning: Avoid using GitFlow. 
Point your teams to trunk-based development resources if you find them using it.\nWhy adopt trunk-based development?\n\nBranches create distance between developers and we do not want that\n— Frank Compagner, Guerrilla Games\n\nIf you don't currently practice trunk-based development, you will likely be suffering from a number of problems:\n\nLong-lived branches mean your developers will have a greatly increased chance of introducing bugs due to diverging code\nBureaucracy of multi-branch strategies, of which GitFlow is the worst, will likely be hindering your team's ability to deliver fast\nMulti-branch strategies often come with hierarchy or gatekeepers which means your organisation doesn't trust their developers to deploy their own code\nPeer reviews are tedious as change sets are often hundreds if not thousands of lines of code long which either means reviews are rushed or take forever\nYour developers likely experience a crunch time at the end of iterations or whenever they eventually merge their branches together for a release which likely means long hours and more bugs\n\nBy adopting trunk-based development, you will begin to break down these problems, reduce bugs in your systems and begin your journey towards more frequent and less risky releases. The aim should be that your teams eventually get to a point that they can release multiple times a day into production.\nWhat trunk-based development isn't\nPeople often get confused when adopting trunk-based development, thinking it is mutually exclusive with practices such as peer reviews and branching. That's not true; in fact, most teams who practice trunk-based development still use branches on a daily basis – they just only live for short amounts of time.\nThere's a great explanation on what trunk-based development is from 2013 by Paul Hammant.\n\nOK, so GitHub pioneered the pull-request as a development workflow. 
This is quite compatible with TBD [trunk-based development], in that a feature/task is marshaled in a place that is not yet on the trunk/master but can be quickly.\n\nAs Paul Hammant explains, the practice of Pull Requests and trunk-based development are quite compatible. In fact, you get the additional benefit of running your continuous integration suite against your Pull Request branch before merging into master, which provides an additional safety check and audit trail for your teams.\n\nI’m saying nothing about what developers do on their own workstations by way of ‘local’ branching to suit their hour by hour activities.\n\nIn many ways, pushing a branch to GitHub (or GitLab or Bitbucket) is just a stepping stone to merging your changes into your trunk. That's no bad thing as that stepping stone provides an audit trail of change as well as providing confidence through continuous integration checks pre-merge.\nThe aim of trunk-based development is to avoid the pains of long-lived branches, not to avoid branching altogether.\nWhere to start\n\nAdopt continuous integration if you haven't already\nDecide on a team and repository to trial trunk-based development for an iteration or two\nHave you and your colleagues read the manual on trunk-based development\nDiscuss as a team how you might begin chunking your work into smaller change sets, and how you plan to review each other's work more frequently\nReview in your team retrospectives and evolve from there\n\nMy recommendation would be to start using trunk-based development on a single repository for an iteration (known as a sprint in Scrum) or two and see how it goes.\nIt can take a bit of time for your team to begin to chunk their work so they can merge multiple times a day. Where they may have previously opened a Pull Request once or twice an iteration, they will now be expected to do so multiple times a day. This means adjusting their units of work to be smaller. 
You need to tackle this as a team together.\nDon't worry, smaller change sets mean easier peer reviews and less risky deploys. If your teams can begin chunking their work differently, they will then quickly feel a reduction in friction of development.\nGood luck!"}},{"title":{"html":"In which I ask questions about the ways that a team might approach Clean Architecture in a way you can still benefit from the productivity of a framework.
","plain":"In which I ask questions about the ways that a team might approach Clean Architecture in a way you can still benefit from the productivity of a framework."},"slug":"ways-of-approaching-clean-architecture","tags":["Clean Architecture"],"publishedAt":{"pretty":"20th December 2019","iso":"2019-12-20T00:00:00.000+00:00"},"content":{"html":"In which I ask questions about the ways that a team might approach Clean Architecture in a way you can still benefit from the productivity of a framework.
\nMany people approach Clean Architecture as all or nothing: you are either architecting your system(s) that way, or you aren't. I agree up to a point, in that you can quickly stop benefitting from Clean Architecture if you ignore its boundaries, which can be a particular issue in dynamic languages like Ruby.
\nWhere I disagree is in the opposite direction, when Clean Architecture pushes your domain so far from the delivery mechanism, say the Ruby on Rails framework, that you can no longer benefit from the Rails ecosystem. Worse still, I see teams starting without a framework altogether, taking an HTTP routing library such as Sinatra, and then over time adding framework features such as ActiveRecord back into it while gaining none of Rails' development speed and gem ecosystem.
\nAn example I see in the wild is where you can no longer benefit from Rails view helpers like form_with
because you either used Sinatra, or you've hidden your ActiveRecord or database DAOs so far away from your controllers you can't inject them into your views to use with form_with
.
There are bigger examples too, such as losing the ability to use gems like Devise for authentication, not being able to use Paperclip (or, these days, ActiveStorage), and creating your own queuing drivers rather than using ActiveJob. The list goes on, and it drives me crazy because folks don't know what they're missing in productivity! As we grow more experienced with Clean Architecture and generalist programming practices, we miss the community-specific learnings and wisdom that exist within a particular ecosystem like Rails.
\nTo quote myself, or maybe Danny Dyer, "it does my absolute nut in" to see such learnings from a community overlooked. It's no one's fault of course, empathy for all, but it seems like institutional memory loss to me. What a waste.
\nI believe there is a path that sits between the all-or-nothing extremes of pure framework versus pure Clean Architecture. There has to be. I ask myself a lot of questions around this; I certainly don't have all the answers.
\nHow might you approach Clean Architecture progressively where you use it in particularly complex areas of your application? Should ATDD start directly in your lib/
far away from your app/
in a Rails application? Or could it start with a feature test, and then a request test, and then dive into a use case unit test? How fragile would your application be in that scenario?
How might you adopt Clean Architecture in a way that you can benefit from Devise, ActiveStorage and ActiveJob?
\nHow do you maintain the discipline to know when to move from Rails CRUD into Clean Architecture? Or, more fundamentally, how do you even know when to use one or the other?
\nThese are ongoing questions I ask myself, and our team at Made Tech. Our Rails Working Group is actively looking at how we can bridge the gap, engaging Clean Architecture with joy as Rails developers, and engaging with Rails community wisdom as generalist XP professionals.
\nStay tuned for more! I also recently wrote about why you might want to adopt Clean Architecture in a Rails application.
","plain":"In which I ask questions about the ways that a team might approach Clean Architecture in a way you can still benefit from the productivity of a framework.\nMany approach Clean Architecture as an all or nothing approach. You are either architecting your system(s) that way, or you aren't. I agree up to a certain point in that you can quickly stop benefitting from Clean Architecture if you ignore it's boundaries which can particularly be an issue in dynamic languages like Ruby.\nWhere I disagree is in the opposite direction when Clean Architecture can push your domain so far from the delivery mechanism, say the Ruby on Rails framework, to the point you can no longer benefit from the Rails ecosystem. Worse still I see teams starting without a framework altogether, taking a HTTP routing library such as Sinatra, and then over time end up adding framework features into it like ActiveRecord but gain none of the benefits of Rails development speed and gem ecosystem.\nThrowing the framework out with the bath water\nAn example I see in the wild is where you can no longer benefit from Rails view helpers like form_with because you either used Sinatra, or you've hidden your ActiveRecord or database DAOs so far away from your controllers you can't inject them into your views to use with form_with.\nThere are bigger examples too, such as losing the ability use gems such as Devise for authentication in your application, and not being able to use Paperclip or these days ActiveStorage, and creating your own queuing drivers rather than using ActiveJob. The list goes on, and it drives me crazy because folks don't know what they're missing in productivity! More experienced with Clean Architecture and generalist programmer practices we miss the community-specific learnings and wisdom that may exist within a particular ecosystem like Rails.\nTo quote myself or maybe Danny Dyer, "it does my absolute nut in" to see such learnings from a community overlooked. 
It's no ones fault of course, empathy for all, but it seems like institutional memory loss in my mind. What a waste.\nA middle way\nI believe there is a path that sits between the all or nothing approaches of both pure framework versus pure Clean Architecture. There has to be. I ask myself a lot of questions around this, I certainly don't have all the answers.\nHow might you approach Clean Architecture progressively where you use it in particularly complex areas of your application? Should ATDD start directly in your lib/ far away from your app/ in a Rails application? Or could it start with a feature test, and then a request test, and then dive into a use case unit test? How fragile would your application be in that scenario?\nHow might you adopt Clean Architecture in a way that you can benefit from Devise, ActiveStorage and ActiveJob?\nDiscipline required\nHow do you maintain a discipline to know when you move from Rails CRUD into Clean Architecture? Or more still, how do you even know when to use one or the other?\nThese are ongoing questions I ask myself, and our team at Made Tech. Our Rails Working Group is actively looking at how we can bridge the gap, engaging Clean Architecture with joy as Rails developers, and engaging with Rails community wisdom as generalist XP professionals.\nStay tuned for more! I also recently wrote about why you might want to adopt Clean Architecture in a Rails application."}},{"title":{"html":"On the reasoning why and how you might use a Clean Architecture approach in Rails applications. Warning: it's nuanced and full of compromise.
","plain":"On the reasoning why and how you might use a Clean Architecture approach in Rails applications. Warning: it's nuanced and full of compromise."},"slug":"why-take-a-clean-architecture-approach-to-rails","tags":["Clean Architecture","Ruby on Rails"],"publishedAt":{"pretty":"18th December 2019","iso":"2019-12-18T00:00:00.000+00:00"},"content":{"html":"On the reasoning why and how you might use a Clean Architecture approach in Rails applications. Warning: it's nuanced and full of compromise.
\nA few years ago I was introduced to the concept of Clean Architecture, and when I saw it something clicked in my brain. Before Clean Architecture, I chose to move logic out of the controllers and models of applications I built and into Plain Old Ruby Objects (POROs). While I maintained some consistency across projects, my approach to architecting and testing these POROs varied more often than not, and it often felt like I was reinventing the wheel.
\nThe POROs were usually placed into a directory like app/services
and called Service Objects or just Services. The Service Object pattern involves creating classes that represent business logic or behaviour that would otherwise sit too close to the database in an ActiveRecord model, or in the controller.
After a little googling I found references going as far back as 2012, including a blog post on "the service layer [that] lies between controllers and models" and a RailsCasts video on using Service Objects in Rails.
\nThis pattern is probably the most common Rails approach to keeping controllers and models skinny, though it remains decidedly non-standard in the eyes of Rails creator DHH.
\n\n\nIt's like we have a J2EE renaissance fair going on at the moment. Digging up every discredited pattern of complexity to do a reunion tour.
\n
Haha, I laughed at that one. He was still hurting from his time as a Java developer when he tweeted that I suspect.
\n\n\nIt's given birth to some truly horrendous monstrosities of architecture. A dense jungle of service objects, command patterns, and worse.
\n
Yup, that is the author of Rails' view of service objects. He don't like 'em much does he? He also doesn't like TDD. YMMV.
\nI think it's okay for the Rails author not to support a common idiom within the community – his mileage did vary. We have to have empathy for differing views in a complex world.
\nDHH was right about one thing though: lumping the Service Object and Command pattern together. You'll find the most commonly touted approach to services in Rails is to name them as verbs and have a common public method like #run
, #call
, #execute
or #exec
. This isn't the Service Object pattern in reality – it's the Command pattern as mentioned by Martin Fowler.
Okay so how about Clean Architecture? I've certainly seen "by the book" Clean Architecture applied to Rails applications. Some teams love it! In this pure adoption you will usually find application and enterprise business rules in the lib/
directory of a Rails app rather than app/
. You will also find directories named domain/
or entities/
, interactions/
or use_cases/
, and gateways/
or adapters/
in lib/
.
I've also seen teams of Ruby on Rails developers completely reject Clean Architecture. They already have their idioms for this problem! Something didn't click in their brains like it did in mine. This was clearly a stretch too far from the Rails way.
\nAgain, empathy for all – everyone can have views that differ. For me, the concept of breaking down business rules into entities, use cases and adapters felt like a fairer split of labour than simply lifting and shifting controller code into a single service object. That said, in many cases where an application is made up of (mostly) simple CRUD, this extra layer of complexity isn't needed at all.
\nThe world isn't as simple as always using one pattern or another. Sure, it's easier to teach one way rather than the nuances of many, but knowing the tradeoffs and being able to argue them with yourself is the real wisdom.
\nThis world of nuance has led me to take a more compromising view on the adoption of Clean Architecture in the Rails world – when it makes sense to use Clean Architecture at all.
\nA more friendly approach for Rails developers is to use more Rails idioms. There are plenty of idioms that match up with the entity, use case and gateway classes from Clean Architecture.
\nThe Services convention in Rails clearly matches that of use case classes that represent actions a user can make on a system. This especially rings true for me when taking the Command pattern approach to Services which is the same convention I see used in Clean Architecture use cases. Just put your use cases in app/services
!
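A minimal sketch of what that might look like, assuming the Command pattern convention of a single public method (the class, method and gateway names here are hypothetical, not from any real project):

```ruby
# app/services/archive_order.rb – a hypothetical use case following the
# Command pattern: one class per user action, one public entry point.
class ArchiveOrder
  def initialize(order_gateway:)
    @order_gateway = order_gateway
  end

  # The single public method; callers know every use case by this interface.
  def execute(order_id:)
    order = @order_gateway.find(order_id)
    order[:archived] = true
    @order_gateway.save(order)
    { archived: true, order_id: order_id }
  end
end
```

A controller would then simply call ArchiveOrder.new(order_gateway: gateway).execute(order_id: params[:id]), keeping both the controller and the model skinny.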
Entities are used to represent enterprise-level business rules – code that will likely be shared across use cases and business functions. They only need introducing when you want to share behaviour between use cases, or when you want to abstract your use cases from the database and the ActiveRecord pattern. There are plenty of libraries, including ActiveModel, that provide helpers to make defining these classes easier. You may need to debate with your team whether they can sit beside database representations in app/models
or whether you split them out into app/domains
.
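To illustrate (the Price class is invented for this example): an entity is just a PORO carrying an enterprise rule, so it can live in app/models or app/domains without depending on the database at all.

```ruby
# app/domains/price.rb – a hypothetical entity holding an enterprise-level
# business rule, with no knowledge of ActiveRecord or the database.
Price = Struct.new(:amount_pence) do
  def with_vat
    # Applying 20% VAT is a business rule that belongs with the entity,
    # not scattered across controllers or database models.
    Price.new((amount_pence * 1.2).round)
  end
end
```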
Adapters are already known to Rails developers as the Adapter pattern is rife there too. Plenty of applications have adapters for connecting to external APIs or databases. These can go in app/adapters
.
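A sketch of the idea (the adapter and its interface are invented for illustration): because use cases only talk to an interface, an in-memory adapter can stand in for a real API or database in tests.

```ruby
# app/adapters/in_memory_user_adapter.rb – a hypothetical adapter. A production
# counterpart wrapping an HTTP client or ActiveRecord would expose the same
# #create/#find interface, so use cases never know which one they are given.
class InMemoryUserAdapter
  def initialize
    @users = {}
    @next_id = 1
  end

  def create(name:)
    user = { id: @next_id, name: name }
    @users[@next_id] = user
    @next_id += 1
    user
  end

  def find(id)
    @users.fetch(id)
  end
end
```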
So you see, Clean Architecture isn't actually a million miles away from being idiomatic to Rails developers. Sure it's full of compromise, and is probably only worth it when your application is complex enough. We have to learn these nuances and know when to compromise to deliver value for our users and organisations.
","plain":"On the reasoning why and how you might use a Clean Architecture approach in Rails applications. Warning: it's nuanced and full of compromise.\nA few years ago I was introduced to the concept of Clean Architecture. When I saw it something clicked in my brain. Before Clean Architecture I chose to remove logic from controllers and models of applications I built into Plain Old Ruby Objects (POROs). While I maintained some consistency across projects, my approach to architecting and testing these POROs varied more often than not and it often felt like I was reinventing the wheel.\nBefore Clean Architecture: Services\nThe POROs were usually placed into a directory like app/services and called Service Objects or just Services. The Service Object pattern involves the creation classes that represent business logic or behaviour that would otherwise sit too close to the the database in an ActiveRecord model or in the controller.\nAfter having a little google I found references going as far back as 2012 including a blog post on "the service layer [that] lies between controllers and models" and a RailsCasts video on using Service Objects in Rails.\nThis pattern is probably the most common Rails pattern for keeping controllers and models skinny though it remains decidedly non-standard by Rails creator DHH.\n\nIt's like we have a J2EE renaissance fair going on at the moment. Digging up every discredited pattern of complexity to do a reunion tour.\n\nHaha, I laughed at that one. He was still hurting from his time as a Java developer when he tweeted that I suspect.\n\nIt's given birth to some truly horrendous monstrosities of architecture. A dense jungle of service objects, command patterns, and worse.\n\nYup, that is the author of Rails' view of service objects. He don't like 'em much does he? He also doesn't like TDD. YMMV.\nI think it's okay for the Rails author to not support a common idiom within the community, his mileage did vary. 
We have to have empathy for differing views in a complex world.\nDHH was right about one thing though: lumping the Service Object and Command pattern together. You'll find the most commonly touted approach to services in Rails is to name them as verbs and have a common public method like #run, #call, #execute or #exec. This isn't the Service Object pattern in reality – it's the Command pattern as mentioned by Martin Fowler.\nIntroducing Clean Architecture: Even further from pure-Rails\nOkay so how about Clean Architecture? I've certainly seen "by the book" Clean Architecture applied to Rails applications. Some teams love it! In this pure adoption you will usually find application and enterprise business rules in the lib/ directory of a Rails app rather than app/. You will also find directories named domain/ or entities/, interactions/ or use_cases/, and gateways/ or adapters/ in lib/.\nI've also seen teams of Ruby on Rails developers completely reject Clean Architecture. They already have their idioms for this problem! Something didn't click in their brains like it did in mine. This was clearly a stretch too far from the Rails way.\nAgain, empathy for all, everyone can have view that differ. For me I found the concept of breaking down business rules into entities, use cases and adapters a fairer split of labour than simply lifting and shifting controller code into a single service object. That said, in many cases where an application is made up of (mostly) simple CRUD, this extra layer of complexity wasn't needed at all.\nThe world isn't as simple as always using one pattern or another. 
Sure it's easier to teach one way, rather than the nuances of many, but knowing and being able to argue the tradeoffs with yourself is the real wisdom.\nIntroducing Clean Architecture: Services (and other friends)\nThis world of nuance has led me to take a more compromising view on the adoption of Clean Architecture in the Rails world – when it makes sense to use Clean Architecture at all.\nA more friendly approach for Rails developers is to use more Rails idioms. There are plenty of idioms that match up with the entity, use case and gateway classes from Clean Architecture.\nThe Services convention in Rails clearly matches that of use case classes that represent actions a user can make on a system. This especially rings true for me when taking the Command pattern approach to Services which is the same convention I see used in Clean Architecture use cases. Just put your use cases in app/services!\nEntities are use to represent enterprise-level business rules. Code that will likely be shared across use cases and business functions. These need only introducing when you want to share behaviour between use cases, or when you want to abstract your use cases from the database and the ActiveRecord pattern. There are plenty of libraries including ActiveModel that provide helpers to make the definition of these classes easier. You may need to debate with your team whether these can sit beside database representations in app/models or whether you split them out into app/domains.\nAdapters are already known to Rails developers as the Adapter pattern is rife there too. Plenty of applications have adapters for connecting to external APIs or databases. These can go in app/adapters.\nSo you see, Clean Architecture isn't actually a million miles away from being idiomatic to Rails developers. Sure it's full of compromise, and is probably only worth it when your application is complex enough. 
We have to learn these nuances and know when to compromise to deliver value for our users and organisations."}},{"title":{"html":"In which I try to untangle the differences in Clean Architecture implementations.
","plain":"In which I try to untangle the differences in Clean Architecture implementations."},"slug":"nuances-in-clean-architecture","tags":["Clean Architecture"],"publishedAt":{"pretty":"14th December 2019","iso":"2019-12-14T00:00:00.000+00:00"},"content":{"html":"In which I try to untangle the differences in Clean Architecture implementations.
\nClean Architecture has seen some popularity as an approach to architecting complex code bases but inevitably has been interpreted in many and often conflicting ways. I wanted to get my head around this particularly as I've been refreshing my Java recently and have come across a number of differences in Clean Architecture implementations. This article is my attempt to list a number of differences I've spotted in the wild.
\nThese differences aren't in any particular order and certainly aren't numbered. What? Did you think this was a listicle?
\nSo, if you learned Clean Architecture from its creator then you will likely be familiar with the idea of presenters in the overall control flow. The idea being that while a framework controller constructs a use case request model to inject into a use case, it does not receive the use case response model at all. Instead, a presenter already injected into the use case is given the response model, at a time the use case determines.
\nIn this director's-cut interpretation of Clean Architecture, the controller is ignorant of the shape of a response model and is never tempted to even peek into it. The controller simply makes a request and then takes it easy.
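That flow can be sketched in a few lines of Ruby (all names here are invented for illustration): the controller wires up the presenter and fires the request, but only the use case ever touches the response model.

```ruby
# A hypothetical use case that tells an injected presenter about the outcome,
# rather than returning a response model to its caller.
class FindGreeting
  def execute(request, presenter)
    # The use case decides when, and with what, the presenter is called.
    presenter.success(message: "Hello, #{request[:name]}!")
  end
end

# The presenter turns the response model into something the view can render.
class GreetingPresenter
  attr_reader :view_model

  def success(response)
    @view_model = response[:message].upcase
  end
end

# The "controller": builds the request, injects the presenter, and that's it –
# it never sees the response model.
presenter = GreetingPresenter.new
FindGreeting.new.execute({ name: "Ada" }, presenter)
presenter.view_model  # => "HELLO, ADA!"
```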
\nWhen researching this I came across a brilliant question and response on StackExchange that summarises the "official" stance on why injected presenters should be used over returned response models.
\nIn response to the question author's suggestion that it's more favourable for the controller to marry the use case and a presenter, rather than the use case knowing about the presenter directly, the responder quickly asserts:
\n\n\nThat's certainly not Clean, Onion, or Hexagonal Architecture.
\n
I like the passion for defending a canonical interpretation but I'm not really sure how it's helpful to someone trying to make sense of it all.
\nThe responder then goes on to suggest one should read up on the Dependency Inversion Principle and Command Query Responsibility Segregation, without offering any real explanation of their importance.
\n\n\nThe problem here is now whatever knows how to ask for the data has to also be the thing that accepts the data.
\n
And why is that a problem I asked myself?
\n\n\nYes! Telling, not asking, will help keep this object oriented rather than procedural.
\n
And why is OO better than procedural?
\n\n\nThe point of making sure the inner layers don't know about the outer layers is that we can remove, replace, or refactor the outer layers confident that doing so wont break anything in the inner layers. What they don't know about won't hurt them. If we can do that we can change the outer ones to whatever we want.
\n
This made the most sense to me, "what they don't know about won't hurt them". I suppose the argument is the controller only cares about firing off the request and never handles the response, it does one job and is therefore simpler and easier to maintain.
\nMy introduction to Clean Architecture was not from reading a book or an "official" blog post. A colleague of mine ran a showcase and later coached me on their interpretation. This interpretation completely missed the idea of presenters altogether, or at least if they did talk about presenters, I didn't retain that information.
\nIt seems many others have also overlooked this detail, blissfully unaware of the unorthodoxy of their approach. I suppose "what they don't know about won't hurt them" too?
\nIn a C# introduction to Clean Architecture, the author happily introduces their readers to the concept of a use case receiving a request and returning a response. What filth. Their example also included a presenter, however it was quite separate from the use case: they passed the response from the use case into the presenter within the console runner. Scandal!
\nTo be honest, going back to the StackExchange response on this topic, I like the summary at the end of the lengthy answer:
\n\n\nAnything that works is viable. But I wouldn't say that the second option you presented faithfully follows Clean Architecture. It might be something that works. But it's not what Clean Architecture asks for.
\n
What works, works, right?
\nRequest and response models are the layers of abstraction that separate your business logic from the outside world. They translate a request from a delivery mechanism or framework into language a use case understands, and then a response is constructed to be interpreted by a presentation layer. At no point is a domain entity exposed.
\nI asked myself a question early on in my exposure to Clean Architecture: if my response looks like my domain entity, why am I mapping the domain into a response? It's unnecessary complexity and indirection, right? I'm not the only one – a questioner on StackOverflow asked why the domain entity could not be used as the request. Clever, I like the way your brain works, internet person!
\nSadly, enforcers of the one true clean way responded, crushing the questioner's spirit and mine in one fell swoop by suggesting Clean Architecture could never be so fragile:
\n\n\nHow could Clean Architecture possibly force rigidity and fragility? Defining an architecture is all about: how to take care widely of fundamental OOP principles such as SOLID and others…
\n
Aight. You're entitled to strongly held beliefs I suppose. The responder then bashes more wikipedia links on the question authors head, this time to the Law of Demeter. Enforcers sure do like to call upon theory.
\nLuckily, we weren't the only ones. A recent article introducing an approach to Clean Architecture for Java 11 described use cases directly returning domain objects.
\nWhat do I really think? By doing this you are certainly introducing fragility to your code base if you let your use cases expose domain entities to the wider world. Is that a problem? It depends on your context. I suppose it also depends on your willingness and discipline to add new layers of indirection when your application calls for it.
\nI can certainly see an argument for just following the rulebook, it's simpler to teach and simpler to be consistent. That said, it's clearly a tradeoff for simpler contexts.
\nThe command pattern states that you expose a method that does stuff. Usually you will find the method called execute()
, exec()
or run()
. This means that you can describe your application in a series of commands, all of which are called in the same way but each doing something different under the hood.
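A tiny Ruby sketch of the idea (command names invented here): every command shares the same entry point, so callers can treat them interchangeably.

```ruby
# Two hypothetical commands with one shared public method, #execute.
class GreetUser
  def execute(name)
    "Hello, #{name}"
  end
end

class DismissUser
  def execute(name)
    "Goodbye, #{name}"
  end
end

# Callers invoke every command the same way, each doing something different
# under the hood.
results = [GreetUser.new, DismissUser.new].map { |command| command.execute("Ada") }
results  # => ["Hello, Ada", "Goodbye, Ada"]
```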
You can see why the command pattern translates quite well to the use case pattern, particularly if you're making sure your use cases do one thing.
\nThat said, I've found no orthodox view on the use of the command pattern. In fact I've found a good mix of examples where use cases do more than one thing.
\nIn the Slalom post previously mentioned, the example has multiple public methods in the FindUser
use case, both findById()
and findAllUsers()
rather than a single execute()
.
In another example, this time in Go, I found the use cases provided a number of public methods that operate on the concept of a user object. For me, this means you end up defining use cases around nouns rather than verbs, which means they can end up doing too much. The issue with use cases doing too much is that they are harder to maintain because they are harder to understand. It also means that your directory of use cases no longer describes the things a user can do with the application, which for me was one of the selling points of Clean Architecture over MVC.
\nYou certainly do find the command pattern in use for use cases though. The C# article uses a Response Handle(Request)
method.
At Made Tech we certainly recommend the command pattern and you can see it in use in many of our public sector work streams.
\nI like the idea of a use case being linked to a single action. I like the way it forces you to decompose your application into small enough chunks, and the way your directory structure ends up describing what your application does. Your mileage may vary, however.
\nTalking of directory structures, there are many interpretations here too, varying in how far you go to separate your delivery mechanism or framework from your domain and use cases.
\nThe pattern we most often use at Made Tech is making sure we have separated our Clean Architecture code from our delivery mechanism. A common case for us is making sure Rails and our Clean Architecture are separated by keeping our Rails code in app/
and our use cases, domains and gateways in lib/
. The GovWifi project is a clear example of this.
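As a rough sketch, that separation might look like this in a repository (the directory names beneath app/ and lib/ are illustrative):

```
app/            # Rails delivery mechanism
  controllers/
  models/       # ActiveRecord classes
  views/
lib/            # framework-agnostic core
  domain/       # entities
  use_cases/
  gateways/
```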
I've noticed elsewhere, particularly in languages that like to structure code as packages such as Java and C#, that you find domain, use cases, adapters and frameworks all in separate packages. The Slalom post certainly favoured this approach.
\nYou then find everything in between. I can see the advantage of using enforceable package boundaries, particularly with tools like Jigsaw in Java that allow you to keep implementation details hidden from the various layers of your application. This is much harder in Ruby and other dynamic languages, where you have a vague notion of namespacing but everything ultimately runs in a global space. At that point the directory structure is your only defence, which is why you see the separation of code into app/
and lib/
in Rails applications.
As I've been writing this article I've noticed all kinds of interchangeable language. Instead of discussing the naming of things – as the old adage suggests, it's one of our hardest problems as coders – I'll provide a few lists of synonyms instead and let you interpret them as you will.
\nFrameworks and drivers
\nFrameworks and drivers represent the outside world. They are the outer layer.
\nTypes of frameworks and drivers: Delivery Mechanism, Framework, Database, UI, Web, Program, Console Runner
\nInterface Adapters
\nThe implementation detail of connecting your business rules with the outside world.
\nTypes of Interface Adapters: Gateways, Repositories, Presenters, Controllers
\nApplication Business Rules
\nThe code that describes what your application does.
\nTypes of Application Business Rules: Use Cases, Interactors, Actions
\nThis layer also provides Ports, also known as Input/Output Ports or Request/Response Models. These are the inputs and outputs of use cases.
\nEnterprise Business Rules
\nThe code that represents the nouns of your organisation.
\nTypes of Enterprise Business Rules: Entities, Domains, Models, Records
\nI'm not sure what to make of it all. Clearly there are different strokes for different folks, and people will interpret it however they will. The best way to equip yourself for such a non-uniform universe is to understand the variations in the use of Clean Architecture and begin to reason about when certain approaches may work better than others.
\nYou'll never 100% get it. People will disagree with you. That's okay – haters gonna hate. Just make sure you are empathetic to others and keep an open mind.
","plain":"In which I try to untangle the differences in Clean Architecture implementations.\nClean Architecture has seen some popularity as an approach to architecting complex code bases but inevitably has been interpreted in many and often conflicting ways. I wanted to get my head around this particularly as I've been refreshing my Java recently and have come across a number of differences in Clean Architecture implementations. This article is my attempt to list a number of differences I've spotted in the wild.\nThese differences aren't in any particular order and certainly aren't numbered. What? Did you think this was a listicle?\nPresenters versus returned values\nSo if you learned Clean Architecture from it's creator then you will likely be familiar with the idea of presenters in the overall control flow. The idea being while a framework controller will construct a use case request model to inject into a use case, it does not receive the use case response model at all. Instead, a presenter already injected into the use case is given a response model, at a time the use case determines.\nIn this directors-cut interpretation of Clean Architecture, the controller is ignorant of the shape of a response model and is never tempted to even peak into it. 
The controller simply makes a request and then takes it easy.\n\nWhen researching this I came across a brilliant question and response on StackExchange that summarises the "official" stance on why injected presenters should be used over returning response model.\nIn response to the question author's suggestion that it's more favourable for the controller to marry the use case and a presenter, rather than the use case knowing about the presenter directly, the responder quickly asserts:\n\nThat's certainly not Clean, Onion, or Hexagonal Architecture.\n\nI like the passion for defending a canonical interpretation but I'm not really sure how it's helpful to someone trying to make sense of it all.\nThe responder then goes onto suggesting one should read up on the Dependency Inversion Principle and Command Query Responsibility Segregation without offering any real explanation of their importance.\n\nThe problem here is now whatever knows how to ask for the data has to also be the thing that accepts the data.\n\nAnd why is that a problem I asked myself?\n\nYes! Telling, not asking, will help keep this object oriented rather than procedural.\n\nAnd why is OO better than procedural?\n\nThe point of making sure the inner layers don't know about the outer layers is that we can remove, replace, or refactor the outer layers confident that doing so wont break anything in the inner layers. What they don't know about won't hurt them. If we can do that we can change the outer ones to whatever we want.\n\nThis made the most sense to me, "what they don't know about won't hurt them". I suppose the argument is the controller only cares about firing off the request and never handles the response, it does one job and is therefore simpler and easier to maintain.\nMy introduction to Clean Architecture was not from reading a book or an "official" blog post. A colleague of mine ran a showcase and later coached me on their interpretation. 
This interpretation missed the idea of presenters altogether, or at least if they did talk about presenters, I didn't retain that information.\nIt seems many others have also overlooked this detail, blissfully unaware of the unorthodoxy of their approach. I suppose "what they don't know about won't hurt them" too?\nIn a C# introduction to Clean Architecture the author happily introduces their readers to the concept of a use case receiving a request and returning a response. What filth. Their example also included a presenter, however it was quite separate from the use case: they passed the response from the use case into the presenter within the console runner. Scandal!\nTo be honest, going back to the StackExchange response on this topic, I like the summary at the end of the lengthy answer:\n\nAnything that works is viable. But I wouldn't say that the second option you presented faithfully follows Clean Architecture. It might be something that works. But it's not what Clean Architecture asks for.\n\nWhat works, works, right?\nUse of request and response models\nRequest and response models are the layers of abstraction that separate your business logic from the outside world. They translate a request from a delivery mechanism or framework into language a use case understands, and then a response is constructed and interpreted by a presentation layer. At no point is a domain entity exposed.\nI asked myself a question early on in my exposure to Clean Architecture. If my response looks like my domain entity, why am I mapping the domain into a response? It's unnecessary complexity and indirection, right? I'm not the only one; a questioner on StackOverflow asked why the domain entity could not be used as the request. 
Clever, I like the way your brain works, internet person!\nSadly, enforcers of the one true clean way responded, crushing the questioner's spirit and mine in one fell swoop, suggesting Clean Architecture could never be so fragile:\n\nHow could Clean Architecture possibly force rigidity and fragility? Defining an architecture is all about: how to take care widely of fundamental OOP principles such as SOLID and others…\n\nAight. You're entitled to strongly held beliefs, I suppose. The responder then bashes more Wikipedia links on the question author's head, this time to the Law of Demeter. Enforcers sure do like to call upon theory.\nLuckily, we weren't the only ones. A recent article introducing an approach to Clean Architecture for Java 11 described use cases directly returning domain objects.\nWhat do I really think? You are certainly introducing fragility to your code base if you let your use cases expose domain entities to the wider world. Is that a problem? It depends on your context. I suppose it also depends on your willingness and discipline to add new layers of indirection when your application calls for it.\nI can certainly see an argument for just following the rulebook: it's simpler to teach and simpler to be consistent. That said, it's clearly a tradeoff for simpler contexts.\nUse of the command pattern\nThe command pattern states that you expose a method that does stuff. Usually you will find the method called execute(), exec() or run(). This means that you can describe your application in a series of commands, all of which are called in the same way but each doing something different under the hood.\nYou can see why the command pattern translates quite well to the use case pattern, particularly if you're making sure your use cases do one thing.\nThat said, I've found no orthodox view on the use of the command pattern. 
In fact I've found a good mix of examples where use cases do more than one thing.\nIn the Slalom post previously mentioned, the example has multiple public methods in the FindUser use case, both findById() and findAllUsers() rather than a single execute().\nIn another example, this time in golang, I found the use cases provided a number of public methods that interact on the concept of a user object. For me, this means you end up defining use cases around nouns rather than verbs, which means they can end up doing too much. The issue with use cases doing too much is that they are harder to maintain because they are harder to understand. It also means that your directory of use cases no longer describes the things a user can do with the application, which for me was one of the selling points of Clean Architecture over MVC.\nYou certainly do find the command pattern in use for use cases though. The C# article uses a Response Handle(Request) method.\nAt Made Tech we certainly recommend the command pattern and you can see it in use in many of our public sector work streams.\nI like the idea of a use case being linked to a single action. I like the way it forces you to decompose your application into small enough chunks, and your directory structure describes what your application does. Your mileage may vary, however.\nDirectory structure and packages\nTalking of directory structures, there are many interpretations, differing too in how far they go to separate your delivery mechanism or framework from your domain and use cases.\nThe pattern we most often use at Made Tech is making sure we have separated our Clean Architecture code from our delivery mechanism. A common case for us is making sure Rails and our Clean Architecture are separated by keeping our Rails code in app/ and our use cases, domains and gateways in lib/. 
The GovWifi project is a clear example of this.\nI've noticed elsewhere, particularly in languages that like to structure code as packages like Java and C#, you find domain, use cases, adapters and frameworks all in separate packages. The Slalom post certainly favoured this approach.\nYou then find everything in between. I can see the advantage of using enforceable package boundaries, particularly with tools like Jigsaw in Java that allow you to keep implementation details hidden from the various layers of your application. This is much harder in Ruby and other dynamic languages where you have a vague idea of namespacing but everything ultimately runs in a global space. At this point the directory structure is your only defence, which is why you see the separation of code into app/ and lib/ in Rails applications.\nThe naming of things\nAs I've been writing this article I've noticed all kinds of interchangeable language. Instead of discussing the naming of things, which as the old adage suggests is one of our hardest problems as coders, I'll provide a few lists of synonyms and let you interpret them as you will.\n\nFrameworks and drivers\nFrameworks and drivers represent the outside world. They are the outer layer.\nTypes of frameworks and drivers: Delivery Mechanism, Framework, Database, UI, Web, Program, Console Runner\nInterface Adapters\nThe implementation detail of connecting your business rules with the outside world.\nTypes of Interface Adapters: Gateways, Repositories, Presenters, Controllers\nApplication Business Rules\nThe code that describes what your application does.\nTypes of Application Business Rules: Use Cases, Interactors, Actions\nThis layer also provides Ports, also known as Input/Output Ports or Request/Response Models. 
These are the inputs and outputs of use cases.\nEnterprise Business Rules\nThe code that represents the nouns of your organisation.\nTypes of Enterprise Business Rules: Entities, Domains, Models, Records\nWhat to make of these nuances?\nI'm not sure what to make of it all. Clearly there are different strokes for different folks. People will interpret however they will. The best way you can equip yourself for such a non-uniform universe is to understand the variations in the use of Clean Architecture and begin to reason when certain approaches may work over others.\nYou'll never 100% get it. People will disagree with you. That's okay, haters gonna hate. Just make sure you are empathetic to others and be open minded."}},{"title":{"html":"On the importance of information management in agile delivery teams. Agile artefacts such as roadmaps, backlogs and boards are all too often ephemeral making it harder to get a full view of the digital products you are building and managing over time.
","plain":"On the importance of information management in agile delivery teams. Agile artefacts such as roadmaps, backlogs and boards are all too often ephemeral making it harder to get a full view of the digital products you are building and managing over time."},"slug":"importance-of-information-in-agile-delivery","tags":[],"publishedAt":{"pretty":"23rd September 2019","iso":"2019-09-23T00:00:00.000+00:00"},"canonical":"https://www.madetech.com/blog/importance-of-information-management-in-agile","featuredImage":"/media/user-story-map.jpg","content":{"html":"On the importance of information management in agile delivery teams. Agile artefacts such as roadmaps, backlogs and boards are all too often ephemeral making it harder to get a full view of the digital products you are building and managing over time.
\nI risk boring people with this topic, as well as upsetting the folk who dislike most forms of documentation, and the folk who love the code to be the documentation. I think it’s important nonetheless to address a problem I see affecting most digital products and their development teams.
\nI often ask these questions of product teams, but far more rarely do I find the answers:

What needs is your product currently meeting?
What desired outcomes are you currently measuring yourself against?
What impact has your product had to date?
\nIn this blog post, I address the importance of information management in agile software development. Bear with me on this one…
\nIn waterfall software development, requirements are written before design and implementation phases. This model meant information flowed in a single direction, like a waterfall, which has proven to be inflexible for software development where requirements change regularly and learning is continuous. With the adoption of agile, teams could instead incrementally develop and document requirements.
\nWhat do we mean by “requirements” though? In software development, requirements are a way of expressing the needs and desired outcomes of a piece of software from various perspectives including business, user, architecture, functional, and non-functional requirements.
\nExpressing requirements, or as I prefer, needs and desired outcomes, of a particular software product is important for making sure stakeholders, from budget sponsors through to users, are happy and satisfied when that software is delivered. We build products to have an impact after all, whether that’s enabling members of the public to apply for a passport online while reducing the public cost of providing passports for taxpayers, or enabling businesses to sell online while enabling shoppers to purchase without needing to physically visit a town centre. Requirements enable us to express these desired outcomes.
\nExpressing needs and desired outcomes isn’t only important during development. Context is needed throughout the lifecycle of a product. Teams need to be able to continuously validate whether their products are having the desired outcomes, so knowing what those desired outcomes are is important. When new needs arise, we need to understand the current context of a product before suggesting changes. We should be able to retire functionality too, when a product is no longer having the impact it once had, and understanding that an impact is no longer being achieved requires past context.
\nThe current state of a product, in terms of the needs it is currently fulfilling and the outcomes and impact currently being worked towards, is an important asset. This information needs to be managed carefully and maintained throughout the product’s lifetime.
\nBeyond behavioural automated tests and manual test plans, in agile software development terms, I rarely find up to date documentation for the current state of a product. This means that understanding the existing context of a product is often an expensive process that involves conversations with team members who have been on the team for a long time and know this inside out. Even then, memory can be inaccurate or differ between members of a team. This means onboarding new members, or providing an overview to stakeholders outside of the team, is inefficient.
\nYou might think to yourself, hold on, isn’t the product backlog an up to date list of needs and desired outcomes?
\n\n\n“The Product Backlog is an ordered list of everything that is known to be needed in the product. It is the single source of requirements for any changes to be made to the product.”\nThe Scrum Guide™
\n
It’s true that the product backlog is an up to date list, but it’s a list of “requirements of any changes to be made to the product,” rather than an artefact representing the existing state of a product. A product backlog is a delta from the current state, towards the desired state. It’s ephemeral and constantly changing.
\nSo then, what to do about managing information about the existing context of a product?
\nIt should be a product manager’s concern to ensure that the team has an up to date list of:

Needs being met by the product
Changes made to the product, often expressed in (user) stories, so that as needs evolve you have a changelog that can provide context around why past decisions were made. The changes should be linked back to needs.
Test plans for each story, or at least, every epic user story, for ensuring that core needs are being met
Testable hypotheses or KPIs for tracking whether that story is having the desired outcomes and impact as intended
Historic record of tasks, incidents, and technical debt related to each story

Product managers need to manage this information about their product, ensuring that information is accurate and useful, while not burdensome for the team to maintain.
\nHow and where exactly this information should be managed is for your team to decide. Some teams keep this information documented on a wall, some use a tool like Confluence, some keep this information in Trello or JIRA. Some prefer to keep this information in code. Let your team decide what is best for them.
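For teams that do keep this information in code, one hypothetical shape (all field names and IDs here are illustrative assumptions, not a recommendation of a specific format) is a small versioned data structure linking needs, stories and KPIs, so the current state of the product lives in the repository alongside it:

```ruby
# Illustrative sketch: needs, their linked stories and KPIs kept as data
# in the codebase. IDs and fields are invented for this example.
NEEDS = [
  {
    id: 'NEED-1',
    description: 'Members of the public can apply for a passport online',
    stories: ['STORY-12', 'STORY-31'],
    kpis: ['90% of applications completed without assistance']
  }
].freeze

# A changelog entry links each change back to the need it serves.
CHANGELOG = [
  { story: 'STORY-12', need: 'NEED-1', summary: 'Initial application form' }
].freeze

# Answer "why was this story done?" by tracing it back to its needs.
def needs_for(story_id)
  NEEDS.select { |n| n[:stories].include?(story_id) }.map { |n| n[:id] }
end
```

Because it is plain data under version control, the history of needs and decisions comes for free from the commit log.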
\nThe important thing here is to be able to understand the current context of a product, and to maintain a historical record of important events relating to the product. It is this historical record that maintains institutional memory, and enables easier onboarding and decision making in the future.
\nI’m planning on writing more on this topic in the future, including a deeper dive into product backlogs and their ephemeral nature, a guide on tools of better agile information management, and perhaps most importantly, a detailed example of product information architecture for product managers. Stay tuned!
\nThis post was originally published on Made Tech's Blog
","plain":"On the importance of information management in agile delivery teams. Agile artefacts such as roadmaps, backlogs and boards are all too often ephemeral making it harder to get a full view of the digital products you are building and managing over time.\n\n\nI risk boring people with this topic, as well as upsetting the folk who dislike most forms of documentation, and the folk who love the code to be the documentation. I think it’s important nonetheless to address a problem I see affecting most digital products and their development teams.\nI often ask these questions of product teams, but much rarer do I find the answers:\n\nWhat needs is your product currently meeting?\nWhat desired outcomes are you currently measuring yourself against?\nWhat impact has your product had to date?\n\nIn this blog post, I address the importance of information management in agile software development. Bear with me on this one…\nSoftware development requirements\nIn waterfall software development, requirements are written before design and implementation phases. This model meant information flowed in a single direction, like a waterfall, which has proven to be inflexible for software development where requirements change regularly and learning is continuous. With the adoption of agile, teams could instead incrementally develop and document requirements.\nWhat do we mean by “requirements” though? In software development, requirements are a way of expressing the needs and desired outcomes of a piece of software from various perspectives including business, user, architecture, functional, and non-functional requirements.\nExpressing requirements, or as I prefer, needs and desired outcomes, of a particular software product is important for making sure stakeholders, from budget sponsors through to users, are happy and satisfied when that software is delivered. 
We build products to have an impact after all, whether that’s enabling members of the public to apply for a passport online while reducing the public cost of providing passports for taxpayers, or enabling businesses to sell online while enabling shoppers to purchase without needing to physically visit a town centre. Requirements enable us to express these desired outcomes.\nContext throughout the lifecycle of a product\nExpressing needs and desired outcomes isn’t only important during development. Context is needed throughout the lifecycle of a product. Teams need to be able to continuously validate whether their products are having the desired outcomes, so knowing what those desired outcomes are is important. When new needs arise, we need to understand the current context of a product before suggesting changes. We should be able to retire functionality too, when a product is no longer having the impact it once had, and understanding that an impact is no longer being achieved requires past context.\nThe current state of a product, in terms of the needs it is currently fulfilling and the outcomes and impact currently being worked towards, is an important asset. This information needs to be managed carefully and maintained throughout the product’s lifetime.\nBeyond behavioural automated tests and manual test plans, in agile software development terms, I rarely find up to date documentation for the current state of a product. This means that understanding the existing context of a product is often an expensive process that involves conversations with team members who have been on the team for a long time and know this inside out. Even then, memory can be inaccurate or differ between members of a team. 
This means onboarding new members, or providing an overview to stakeholders outside of the team, is inefficient.\nA product backlog is a delta\nYou might think to yourself, hold on, isn’t the product backlog an up to date list of needs and desired outcomes?\n\n“The Product Backlog is an ordered list of everything that is known to be needed in the product. It is the single source of requirements for any changes to be made to the product.”\nThe Scrum Guide™\n\nIt’s true that the product backlog is an up to date list, but it’s a list of “requirements of any changes to be made to the product,” rather than an artefact representing the existing state of a product. A product backlog is a delta from the current state, towards the desired state. It’s ephemeral and constantly changing.\nSo then, what to do about managing information about the existing context of a product?\nManaging information – the agile way\nIt should be a product manager’s concern to ensure that the team has an up to date list of:\n\nNeeds being met by the product\nChanges made to the product, often expressed in (user) stories, so that as needs evolve you have a changelog that can provide context around why past decisions were made. The changes should be linked back to needs.\nTest plans for each story, or at least, every epic user story, for ensuring that core needs are being met\nTestable hypotheses or KPIs for tracking whether that story is having the desired outcomes and impact as intended\nHistoric record of tasks, incidents, and technical debt related to each story\nProduct managers need to manage this information about their product, ensuring that information is accurate and useful, while not burdensome for the team to maintain.\n\nHow and where exactly this information should be managed is for your team to decide. Some teams keep this information documented on a wall, some use a tool like Confluence, some keep this information in Trello or JIRA. Some prefer to keep this information in code. 
Let your team decide what is best for them.\nThe important thing here is to be able to understand the current context of a product, and to maintain a historical record of important events relating to the product. It is this historical record that maintains institutional memory, and enables easier onboarding and decision making in the future.\nFuture series on agile information management\nI’m planning on writing more on this topic in the future, including a deeper dive into product backlogs and their ephemeral nature, a guide on tools of better agile information management, and perhaps most importantly, a detailed example of product information architecture for product managers. Stay tuned!\nThis post was originally published on Made Tech's Blog"}},{"title":{"html":"On the trouble you can encounter when trying to separate your domain logic from a framework like Rails.
","plain":"On the trouble you can encounter when trying to separate your domain logic from a framework like Rails."},"slug":"decoupling-the-delivery-mechanism","tags":["Clean Architecture","Ruby on Rails"],"publishedAt":{"pretty":"11th August 2018","iso":"2018-08-11T00:00:00.000+00:00"},"content":{"html":"On the trouble you can encounter when trying to separate your domain logic from a framework like Rails.
\nIf you work with Rails and haven't heard of Clean Architecture you may have heard of Hexagonal Architecture and most probably have heard of the Service Object pattern. These patterns seek to keep your controllers and models skinny by using Plain Old Ruby Objects (POROs) to model domain problems. If you do not know these patterns, I suggest you read up a little to understand the context in which this article is written.
\nWhen exploring concepts like Clean Architecture in a Rails context it's often tempting to cut corners. Perhaps rather than using test doubles for Rails dependencies inside your library code you decide to depend on them directly.
\n\n\nIn the example above we now have a direct dependency on the Rails model Ship
and also on the database itself. This means slower tests, as they hit the DB, and a system that is harder to change: if you change your model you'll need to change this test too.
Or maybe you decided to use your model as a gateway rather than create a PORO adapter class to encapsulate the model.
\n\n\nHere we have another direct dependency on Rails and the database. Again, slower tests, and a library that needs to change whenever your application changes.
\nOr you thought you could return models from your gateways and treat them like domain objects.
\n\n\nFinally in this example we return an ActiveRecord model from the gateway and therefore expose a large interface to the wider application. The problem here is that method calls to the Ship
model could trigger SQL queries, meaning control of database performance is spread throughout the codebase rather than managed solely by gateways. This again makes the system harder to reason about and harder to change.
The problem with doing any of this is that you no longer have a library that represents your business logic independent of Rails, one that is easy to test and easy to change. Instead you are left with a contrived and non-standard Rails setup that is harder to test and difficult to change. It would have been better to stay omakase.
\nIf running rspec spec/unit/lib
requires you to load rails_helper.rb
you've already fallen foul of coupling your library to Rails. Allow me to apologise for the lack of information out there that might have helped you avoid this situation. At this point you have one of two options:
rails_helper.rb
app/
directory and keep to a more standard omakase approach
There's a more general rule here too that goes beyond Clean Architecture and Rails. Your library code should not depend on any delivery mechanism, database, or API. There should be no need to depend on database fixtures or factories, or on framework or database classes being defined. This rule exists to make change cheap in the future.
\nOf course, you will likely have acceptance and feature tests that will depend on rails_helper.rb
and that's okay. You want to test when delivering your library via Rails that everything works in harmony. This will only be a certain percentage of your tests. Remember the testing pyramid?
As a rule the unit tests for your library, usually found in spec/unit/lib
, should not need to depend on Rails.
In example one we saw the gateway specs relying on Rails for setting up database state. We can avoid this by using RSpec's class_double
and instance_double
.
The test remains largely the same except that there is no direct dependency on Rails this time. We add another test, 'retrieves ship from model'
, to ensure that we call the mock as expected; this replaces the need to rely on the state of the database.
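The original spec isn't reproduced here, but the decoupled gateway test might look roughly like the following sketch. To keep it self-contained it uses a hand-rolled double and plain assertions in place of RSpec's class_double; the gateway's constructor argument and the model's find_by call are assumptions:

```ruby
# Sketch: the gateway receives the model class as a dependency, so a unit
# test can hand it a double instead of the Rails Ship model.
module Space
  module Flight
    class ShipGateway
      def initialize(ship_model:)
        @ship_model = ship_model
      end

      def find_by_id(id)
        @ship_model.find_by(id: id)
      end
    end
  end
end

# Stand-in for class_double(Ship): records the call and returns a canned
# record, replacing any reliance on database state.
class ShipModelDouble
  attr_reader :last_find_args

  def find_by(args)
    @last_find_args = args
    :a_ship_record
  end
end

model = ShipModelDouble.new
gateway = Space::Flight::ShipGateway.new(ship_model: model)
result = gateway.find_by_id(42)

# 'retrieves ship from model': verify the mock was called as expected.
raise 'expected find_by(id: 42)' unless model.last_find_args == { id: 42 }
raise 'expected canned record' unless result == :a_ship_record
```

In an RSpec suite the double would be a verifying class_double, which additionally checks that the stubbed methods really exist on the model.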
In example two we saw a use case using an ActiveRecord model directly as a gateway. Not only this but the spec directly depended on the model and database state via FactoryBot.
\n\n\nInstead of using ActiveRecord as the gateway we instead rely on an adapter gateway Space::Flight::ShipGateway
. We go even further by not directly depending on the gateway and instead use instance_double
to mock it out. This approach decouples the use case from ActiveRecord resulting in a spec that doesn't touch the database.
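A runnable sketch of that decoupled use case follows. The use case name, its execute method and the response shape are illustrative assumptions; in an RSpec suite the collaborator would be an instance_double(Space::Flight::ShipGateway), replaced here by a hand-rolled fake so the sketch is self-contained:

```ruby
# Sketch: the use case is handed a gateway rather than reaching for an
# ActiveRecord model, so its unit test never touches Rails or a database.
module Space
  module Flight
    class ViewShip
      def initialize(ship_gateway:)
        @ship_gateway = ship_gateway
      end

      def execute(ship_id:)
        ship = @ship_gateway.find_by_id(ship_id)
        { id: ship.id, name: ship.name }
      end
    end
  end
end

# Hand-rolled stand-in for the gateway double: no Rails, no database.
FakeShip = Struct.new(:id, :name)

class FakeShipGateway
  def find_by_id(id)
    FakeShip.new(id, 'Escape Pod')
  end
end

use_case = Space::Flight::ViewShip.new(ship_gateway: FakeShipGateway.new)
response = use_case.execute(ship_id: 1)

raise 'unexpected response' unless response == { id: 1, name: 'Escape Pod' }
```

Injecting the gateway through the constructor is what makes the swap possible: the use case only cares that its collaborator responds to find_by_id.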
In example three Space::Flight::ShipGateway
returns an ActiveRecord model from its #find_by_id
method. We really should have the discipline to return a domain object from the gateway instead.
Here we define Space::Flight::Ship
, a domain object that exposes a limited set of functions compared to an ActiveRecord model. Our gateway constructs this domain object and returns it.
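A sketch of what that domain object and mapping might look like. The attribute names and the fake model used to exercise it are illustrative assumptions:

```ruby
# Sketch: the gateway maps a database record into a narrow domain object
# at the boundary, so callers can never trigger SQL the way they could on
# an ActiveRecord model.
module Space
  module Flight
    # Domain object: a small, read-only interface.
    Ship = Struct.new(:id, :name, :crew_capacity, keyword_init: true)

    class ShipGateway
      def initialize(ship_model:)
        @ship_model = ship_model
      end

      def find_by_id(id)
        record = @ship_model.find_by(id: id)
        return nil unless record

        # The mapping happens here, inside the gateway, so the rest of
        # the application never sees ActiveRecord.
        Ship.new(id: record.id, name: record.name,
                 crew_capacity: record.crew_capacity)
      end
    end
  end
end

# A fake model record standing in for the Rails Ship model in this sketch.
FakeRecord = Struct.new(:id, :name, :crew_capacity)

class FakeShipModel
  def find_by(id:)
    FakeRecord.new(id, 'Escape Pod', 4)
  end
end

ship = Space::Flight::ShipGateway
  .new(ship_model: FakeShipModel.new)
  .find_by_id(7)

raise 'unexpected ship' unless ship.name == 'Escape Pod' && ship.crew_capacity == 4
```

Because Ship is a plain Struct, adding a method to it is a deliberate act, which is exactly the discipline the next section talks about.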
It takes discipline as a software engineer to keep interfaces clean between the various layers of your application. This is especially true in Ruby, where interfaces do not exist as part of its OOP implementation.
\nDiscipline and experience leads to good architecture.
\n\n","plain":"On the trouble you can encounter when trying to separate your domain logic from a framework like Rails.\nIf you work with Rails and haven't heard of Clean Architecture you may have heard of Hexagonal Architecture and most probably have heard of the Service Object pattern. These patterns seek to keep your controllers and models skinny by using Plain Old Ruby Objects (POROs) to model domain problems. If you do not know these patterns, I suggest you read up a little to understand the context in which this article is written.\n\nWhen exploring concepts like Clean Architecture in a Rails context it's often tempting to cut corners. Perhaps rather than using test doubles for Rails dependencies inside your library code you decide to depend on them directly.\n\n\nIn the example above we now have a direct dependency on the Rails model Ship and also on the database itself. This means slower tests as they hit the DB and also means the system is harder to change as if you change your model you'll need to change this test too.\nOr maybe you decided to use your model as a gateway rather than create a PORO adapter class to encapsulate the model.\n\n\nHere we have another direct dependency on Rails and the database. Again slower tests and your library needs to change when your application changes.\nOr you thought you could return models from your gateways and treat it like a domain object.\n\n\nFinally in this example we return an ActiveRecord model from the gateway and therefore expose a large interface to the wider application. The problem here is that method calls to the Ship model could trigger SQL queries meaning control of database performance is spread through the codebase rather than solely managed by gateways. This again makes the system harder to reason about and harder to change.\nThe problem with doing any of this is that you no longer have a library that represents your business logic independent of Rails, that is easy to test and easy to change. 
Instead you are left with a contrived and non-standard Rails setup that is harder to test and difficult to change. It would have been better to stay omakase.\nYour library should not depend on Rails\nIf running rspec spec/unit/lib requires you to load rails_helper.rb you've already fallen foul of coupling your library to Rails. Allow me to apologise for the lack of information out there that might have helped you avoid this situation. At this point you have one of two options:\n\nFind a way not to depend on rails_helper.rb\nMove your library code back into the app/ directory and keep to a more standard omakase approach\n\nThere's a more general rule here too that goes beyond Clean Architecture and Rails. Your library code should not depend on any delivery mechanism, database, or API. There should be no need to depend on database fixtures or factories, or on framework or database classes being defined. This rule exists to make change cheap in the future.\nOf course, you will likely have acceptance and feature tests that will depend on rails_helper.rb and that's okay. You want to test when delivering your library via Rails that everything works in harmony. This will only be a certain percentage of your tests. Remember the testing pyramid?\n\nAs a rule the unit tests for your library, usually found in spec/unit/lib, should not need to depend on Rails.\nMocking out ActiveRecord in your gateway unit tests\nIn example one we saw the gateway specs relying on Rails for setting up database state. We can avoid this by using RSpec's class_double and instance_double.\n\n\nThe test remains largely the same except that there is no direct dependency on Rails this time. We add another test, 'retrieves ship from model', to ensure that we call the mock as expected; this replaces the need to rely on the state of the database.\nMocking out gateways in your use case unit tests\nIn example two we saw a use case using an ActiveRecord model directly as a gateway. 
Not only this but the spec directly depended on the model and database state via FactoryBot.\n\n\nInstead of using ActiveRecord as the gateway we instead rely on an adapter gateway Space::Flight::ShipGateway. We go even further by not directly depending on the gateway and instead use instance_double to mock it out. This approach decouples the use case from ActiveRecord resulting in a spec that doesn't touch the database.\nAvoid returning ActiveRecord models from your gateways\nIn example three Space::Flight::ShipGateway returns an ActiveRecord model from its #find_by_id method. We really should have the discipline to return a domain object from the gateway instead.\n\n\nHere we define Space::Flight::Ship, a domain object that exposes a limited set of functions compared to an ActiveRecord model. Our gateway constructs this domain object and returns it.\nDiscipline as a software engineer\nIt takes discipline as a software engineer to keep interfaces clean between the various layers of your application. This is especially true in Ruby, where interfaces do not exist as part of its OOP implementation.\nDiscipline and experience lead to good architecture.\n\nGood architecture makes the system easy to understand, easy to develop, easy to maintain, and easy to deploy.\nClean Architecture by Robert C. Martin\n"}},{"title":{"html":"Good architecture makes the system easy to understand, easy to develop, easy to maintain, and easy to deploy.\nClean Architecture by Robert C. Martin
\n
On building lightweight Docker images for Go applications.
","plain":"On building lightweight Docker images for Go applications."},"slug":"lightweight-docker-images-for-go","tags":[],"publishedAt":{"pretty":"17th January 2017","iso":"2017-01-17T00:00:00.000+00:00"},"content":{"html":"On building lightweight Docker images for Go applications.
\nIn my last article I wrote about deploying Go apps to Now. I arrived at a solution that compiled a Go app inside a Docker container. This means that the Docker container needed to be built with all the dependencies necessary to compile Go code into something useful.
\nWe can find out the size of an image by building it with a tag. Using the article's example hello-world app we can run docker build
in it's directory.
$ docker build -t hello-world .\n\nSending build context to Docker daemon 5.701 MB\nStep 1 : FROM golang:alpine\n ---> 00371bbb49d5\nStep 2 : ADD . /go/src/github.com/lukemorton/hello-world\n ---> Using cache\n ---> dda524fc2668\nStep 3 : RUN go install github.com/lukemorton/hello-world\n ---> Using cache\n ---> f830049507ec\nStep 4 : CMD /go/bin/hello-world\n ---> Using cache\n ---> ba41def5c5d6\nStep 5 : EXPOSE 3000\n ---> Using cache\n ---> 9bd3101ccc6b\nSuccessfully built 9bd3101ccc6b
Once the image has been built and tagged we can then check the size with the docker images
command:
$ docker images hello-world --format {{.Size}}\n\n251.9 MB
I filtered the results of docker images by passing the tag I gave the image when I built it, hello-world
. I also provided the --format
flag to only output the size. Try running docker images
without any arguments to see a more detailed list of your images.
Okay so how much larger is this image than the Go binary that it compiles? In other words, how much cruft does the Docker image add? From the directory of our hello world example we can find out.
\n$ go build -o hello-world .\n$ du -kh hello-world\n\n5.4M hello-world
Woah, 5.4MB of the 251.9MB image is taken up by our application. That's about 2% of the image size. The rest of the space is taken up by the operating system and dependencies required to build the binary.
\nIt's worth saying that the base image I used was golang:alpine
which is the smallest possible image on which you can build Go code. If you change FROM golang:alpine
in the Dockerfile
to FROM golang
, compile the Docker image and check the size, you'll see it's much bigger.
$ docker build -t hello-world:large .\n$ docker images hello-world:large --format {{.Size}}\n\n691 MB
That's over 2.7 times the size of our alpine-based image.
\nWhat if I told you we could get the image size down to the size of our binary? You'd believe me, right? :)
\nIn order to get the image size down further we need to compile our Go application outside of the Docker container. We then switch our base image from golang:alpine
to scratch
, the lightest image of them all: it's empty! Its name is actually a pun, FROM scratch
.
I'm getting excited, let's update our Dockerfile:
\nFROM scratch\nADD hello-world /\nCMD ["/hello-world"]\nEXPOSE 3000
Now before we run docker build
we need to compile our Go binary beforehand. Notice the ADD hello-world /
instruction: that's what copies the binary into the image. We need to build it.
$ CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o hello-world .
Unlike my original article, we provide a number of flags to the go build
command. These flags make our Go binary portable enough to run inside our empty image. Without them we get errors about missing shared libraries; it gets real ugly real quick, trust me.
Now let's build:
\n$ docker build -t hello-world:light .\n$ docker images hello-world:light --format {{.Size}}\n\n5.635 MB
We did it! Small huh? Let me know what you think on Twitter @LukeMorton.
\nFROM golang:alpine
.A walkthrough on how to use Docker to deploy a Go app on Zeit's Now realtime global deployment platform.
","plain":"A walkthrough on how to use Docker to deploy a Go app on Zeit's Now realtime global deployment platform."},"slug":"deploying-go-on-zeit-now","tags":[],"publishedAt":{"pretty":"15th January 2017","iso":"2017-01-15T00:00:00.000+00:00"},"content":{"html":"A walkthrough on how to use Docker to deploy a Go app on Zeit's Now realtime global deployment platform.
\nI recently moved www.lukemorton.co.uk to being hosted on Now. It's a Node.js app built with Next.js and I'm telling ya, it was a great experience as Now is super easy to get going on.
\nAfter moving this site across I felt excited to try Now for something else. I've always wanted to write an API in Go and Now can serve pretty much any tech stack as it supports Docker. So I gave it a shot and this hello world blog post documents my learnings.
\nIn case you haven't built a Go app before, I'll quickly go over how to set it up on your machine.
\nmkdir ~/GoWork
and then export GOPATH=~/GoWork
npm install -g now-cli
Now let's create a quick HTTP hello world example in Go. First we need to create a directory for it within your Go workspace:
\nmkdir -p $GOPATH/src/github.com/lukemorton/hello-world
You can replace lukemorton
with your own GitHub username.
Now we need some code; below is my quick Go hello world. It's a very simple hello world web app that runs on port 3000. There's some logging in there to let you know when it starts and if something goes wrong.
\npackage main\n\nimport (\n "fmt"\n "log"\n "net/http"\n)\n\nfunc main() {\n http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {\n fmt.Fprintln(w, "Hello world")\n })\n\n log.Println("Serving on localhost:3000")\n err := http.ListenAndServe(":3000", nil)\n log.Fatal(err)\n}
\nSave this file in your new directory as hello.go
. Now we can run it:
go run hello.go\n# => 2017/01/15 15:36:26 Serving on localhost:3000
Visit your hello world in your browser http://localhost:3000. You should see "Hello world" printed.
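If you'd rather check the response without opening a browser, Go's net/http/httptest package can exercise the same handler in-process. This isn't part of the app itself, just a quick sketch that reuses the handler function from hello.go:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// fetchHello spins up a throwaway test server around the same handler
// used in hello.go and returns whatever the handler writes.
func fetchHello() string {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello world")
	})

	srv := httptest.NewServer(handler)
	defer srv.Close()

	res, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	body, err := io.ReadAll(res.Body)
	if err != nil {
		panic(err)
	}
	return string(body)
}

func main() {
	fmt.Print(fetchHello()) // prints "Hello world"
}
```

Run it with go run and you should see the same greeting, no port or browser required.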
\nCoooooool, so we've got our Go web app and now we want to deploy it. To do this we are going to create a Docker image that compiles and runs our Go app for us. Once we have this we can deploy it to Now.
\nWe then create our Dockerfile
which uses golang's official alpine image. Alpine is a lightweight operating system ideal for creating small(er) Docker images.
FROM golang:alpine\nADD . /go/src/github.com/lukemorton/hello-world\nRUN go install github.com/lukemorton/hello-world\nCMD ["/go/bin/hello-world"]\nEXPOSE 3000
Not much to it, is there? Make sure you save this as Dockerfile
in the same directory as your hello.go
file. All it does is copy hello.go
into the container, compile it into a binary, run that binary and expose port 3000. Again you can replace lukemorton
with your own GitHub username.
Now we are ready to deploy:
\nnow
And that's it. It'll upload your files, build a Docker container and run it for you. In case you wanted to look over all the files, I've placed them on my GitHub. Let me know what you think on Twitter @LukeMorton.
","plain":"A walkthrough on how to use Docker to deploy a Go app on Zeit's Now realtime global deployment platform."}},{"title":{"html":"On structuring Rails apps for growth. Often a tricky area this article will walk you through a refactor and hopefully you'll walk away with a few more ideas for structuring your business logic.
","plain":"On structuring Rails apps for growth. Often a tricky area this article will walk you through a refactor and hopefully you'll walk away with a few more ideas for structuring your business logic."},"slug":"business-logic-in-rails","tags":["Ruby on Rails"],"publishedAt":{"pretty":"24th September 2016","iso":"2016-09-24T00:00:00.000+00:00"},"content":{"html":"On structuring Rails apps for growth. Often a tricky area this article will walk you through a refactor and hopefully you'll walk away with a few more ideas for structuring your business logic.
\nI read and loved Tom Dalling's post about Isolating Side Effects in Ruby today and agree with a lot of his sentiments with regards to functional core, imperative shell. I want to expand on the testing of business logic (also known as domain logic) in Rails with his examples and continue on to explain how we can evolve our applications as we add more features to them. I'll be referring to the post quite a bit so it is probably best you read that first.
\nTom discusses moving the business logic into what he calls functionally pure methods within a static/singleton class. His use of the phrase "functionally pure" is quite the liberty as he admits in his own article.
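For anyone reading without Tom's post open, the sort of module he describes looks roughly like this. A hedged sketch: Plan, Account and Bill are plain Ruby stand-ins so the snippet runs without Rails, and the method bodies are my assumptions rather than his exact code:

```ruby
# Illustrative stand-ins so the sketch runs without Rails.
Plan    = Struct.new(:amount)
Account = Struct.new(:type, :plan)
Bill    = Struct.new(:account, :amount)

module Billing
  # In the real module this is an ActiveRecord SQL query, so it is
  # neither pure nor free of the database.
  def self.billable_accounts(accounts)
    accounts.reject { |account| account.type == :free }
  end

  # Returns an initialised (unsaved) Bill object.
  def self.monthly_bill(account)
    Bill.new(account, account.plan.amount - discounts(account))
  end

  # Functionally pure: same input, same output, no side effects.
  def self.discounts(account)
    account.type == :friend ? 5 : 0
  end
end
```

In the real module .billable_accounts takes no arguments and queries the database; the parameter here only keeps the sketch self-contained.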
\n\n\nRevisiting the Billing
module we can observe a few things:
.billable_accounts
performs an SQL query using an ActiveRecord object
.monthly_bill
returns an initialised ActiveRecord object
.discounts
is functionally pure business logic
The first two methods aren't really functionally pure but how does this affect their testability? We can jump straight into testing .billable_accounts
.
Unfortunately this test hits the DB. The fact that the method hits the DB will, to some, mean it isn't functionally pure or business logic at all. The method isn't functionally pure because, even though you do not pass any parameters, the value it returns can vary depending on what is in the DB. Pure functions should return the same results every time they are called with the same arguments. It's not business logic either, as it deals with implementation-specific details such as its use of ActiveRecord methods.
\nFor now, I suppose we could do some mocking to get around this.
\n\n\nI'm not opposed to resorting to this if we need to get a method under test quickly. Luckily Ruby and RSpec make this kind of thing easy. You certainly could not do this as easily in PHP or Java.
\nMoving onto .monthly_bill
we should notice it is a little easier to test.
The tests here are not too bad. .monthly_bill
is easier to test as it is mostly business logic and doesn't rely on complex external interfaces. The only external interfaces it relies on are account#plan#amount
, account#type
and Bill.new
.
If we structure our assertions into an actual RSpec suite, our tests describe our billing domain well. This isn't a bad place to be. The suite entirely avoids hitting the DB so it'll be fast.
\n\n\nOne thing to note is that we are testing .discounts
in the .monthly_bill
example as well as in its own specific test. To me this signals that we are probably exposing functionality that does not need to be exposed. Calculating discounts is only used when we are creating a monthly bill, so we can probably hide that functionality and test it indirectly with our "creating monthly bill" context.
After making .discounts
private our test suite will begin to fail with NoMethodError: private method 'discounts' called for Billing:Module
. This is okay, we can now delete this failing test.
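Hiding a module method like this is a one-liner with private_class_method. A minimal sketch (the method names and amounts here are invented for illustration):

```ruby
module Billing
  def self.monthly_bill_amount(base_amount, account_type)
    # Private class methods are still callable from inside the module.
    base_amount - discounts(account_type)
  end

  def self.discounts(account_type)
    account_type == :friend ? 5 : 0
  end
  # Hide .discounts from outside callers; external calls now raise
  # NoMethodError, exactly as the failing spec reports.
  private_class_method :discounts
end
```

With this in place the public interface shrinks to the one method the rest of the app actually needs.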
Tom goes on to talk about Skinny Models and using objects in Rails to model actions rather than things. The Billing
module is an object that performs actions rather than modelling a thing.
The Account
and Bill
ActiveRecord objects do model things, but we've kept the business logic separate by not placing that logic inside the models.
Unfortunately the way we've built Billing
module means it will only have a short shelf life. Billing is a large domain so the module will likely get bigger and bigger. Not only that but it is responsible for two separate actions: querying billing accounts and creating bills. Once we start adding more billing-related actions to this module, for example refunding a bill, the test suite will grow along with the module itself, which to me makes it more difficult for engineers to quickly understand the responsibilities of the module, and therefore more difficult to change it.
Luckily the piece of wisdom shared in the post provides the answer.
\n\n\nEnlightenment comes when you use objects in a server-side web application to model actions, not things.
\n– Brad Urani
\n
We need to use objects (read: plural) to model actions. We simply need to split the file down into responsibilities.
\nWhat would breaking down the billing module into individual actions look like?
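Something like the following, sketched with illustrative names and hashes standing in for the real models (the original gist isn't reproduced here, so treat the bodies as assumptions):

```ruby
module Billing
  # One module per responsibility: finding accounts...
  module Accounts
    def self.billable(accounts)
      accounts.reject { |account| account[:type] == :free }
    end
  end

  # ...and building a monthly bill.
  module MonthlyBill
    def self.for(account)
      amount = account[:amount] - (account[:type] == :friend ? 5 : 0)
      { account: account, amount: amount }
    end
  end
end
```

Each module now owns a single concern, so each can grow (and be tested) on its own.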
\n\n\nWe can then split out the RSpec examples.
\n\n\nWe've now got two billing modules for two different topics, Billing::Accounts
and Billing::MonthlyBill
. However from the name of these modules it still feels like we've moved back to modelling things rather than actions.
In order to categorise our logic into actions we need to think about what triggers them. What is consuming our business logic? From Tom's original example he was tying everything together in a job class.
\n\n\nAbove is the updated job to match the changes we've made in this article. From the names of the classes I'm still not getting a clear picture of what is happening here. Reading the code of the job does tell us, but it is not easy to understand at a glance. What if the job called an action object?
\n\n\nI think this is a lot easier to understand. We're passing in our create_and_send_monthly_bill
object and calling #to_all_accounts
on it. From the name of the parameter and the method called we paint a clear picture of what is going on.
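The shape of that job can be sketched like so; the plain Ruby fake underneath shows how little a test now needs (both are illustrative, not the exact code from the gist):

```ruby
# The job simply delegates to whatever action object it is given.
class MonthlyBillingJob
  def perform(create_and_send_monthly_bill)
    create_and_send_monthly_bill.to_all_accounts
  end
end

# Anything responding to #to_all_accounts will do in a test; no
# database, no mock library required.
class FakeAction
  attr_reader :called

  def to_all_accounts
    @called = true
  end
end
```

In an RSpec suite you'd pass a double instead, but the plain Ruby fake makes the same point.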
As you can see our MonthlyBillingJob
can now be tested without as many mocks as before.
We of course now need to create our create and send monthly bill action.
\n\n\nEssentially the code from the job class is now in this domain specific action class. The RSpec example will therefore be fairly similar to the old job spec.
\n\n\nQuite a jumble as in the original post. I think this in itself is a smell about the way our code works. We have to do a fair bit of mocking in order to test our action because our CreateAndSendMonthlyBill
action calls Billing::Accounts
and Billing::MonthlyBill
directly. We are also duplicating our testing efforts again.
One solution to this problem would be to inject Billing::Accounts
and Billing::MonthlyBill
into our action. This will allow us to create doubles in our test and pass those in. This would mean our mocking would be simplified and we will reduce the duplication of our tests.
Ha, that didn't go as well as I expected. It's actually more lines than our previous test. I think this is a reflection of the design of our business logic. Business logic should be easy to understand and easy to test. These properties should exist when we reach a good design. I often find gut instinct tells me if the design is good, and I think this is informed by how easily my brain can understand the code.
\nWe'll need to update our implementation of CreateAndSendMonthlyBill
to satisfy this test.
We've called the variable storing the MonthlyBill
monthly_bill_initialiser
which does clearly explain what it does, but the method #to_all_accounts
is now a little harder to understand.
We should probably move the creation and sending of bills into their own actions that are composed together in order to achieve the larger create and send action. The CreateAndSendMonthlyBill
contains the word "and". This to me signals there are two separate concerns here. We could move the two concerns into their own classes and then use them within the bigger action.
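A sketch of that composition; every name and method body here is an assumption, with hashes standing in for the models and "sending" reduced to tagging the bill:

```ruby
# Creating the bill is one concern...
class CreateMonthlyBill
  def for(account)
    amount = account[:amount] - (account[:type] == :friend ? 5 : 0)
    { account: account, amount: amount }
  end
end

# ...and sending it is another. In the real app this would deliver an
# email; here we just mark the bill as sent.
class SendMonthlyBill
  def deliver(bill)
    bill.merge(sent: true)
  end
end

# The bigger action composes the two smaller ones.
class CreateAndSendMonthlyBill
  def initialize(create: CreateMonthlyBill.new, send: SendMonthlyBill.new)
    @create = create
    @send = send
  end

  def to_all_accounts(accounts)
    accounts.map { |account| @send.deliver(@create.for(account)) }
  end
end
```

Each smaller action can now be tested in isolation, and the bigger one tested with trivial stand-ins.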
Looking at CreateAndSendMonthlyBill#to_all_accounts
the code now makes more sense when you read it.
Our tests can now be split up, which will reduce their complexity and make them easier to understand too.
\n\n\nOur app is a lot easier to understand from the filesystem level too.
\n\n\nWe can at a glance of the file names know what our application does.
\nWe are almost there but we still have our Billing::Accounts
and Billing::MonthlyBill
modules that represent things rather than actions.
Billing::Accounts
is an easy win as we just change the class name to begin with a verb, "find".
Billing::MonthlyBill
is a little harder to change. It is responsible for initialising a Bill
object with a correct amount. This feels very much related to the creation of the bill to me. It's almost as if we could move all the logic into Billing::CreateMonthlyBill
.
Doing this violates one of Tom's rules about not mixing business logic with things that have side effects. However for me, at this point in time no other object needs to initialise a Bill
with the same logic so until that need arises I would in fact keep it all in this class.
You'll have probably noticed that we now inject an empty Bill
object. This is to keep things easy to test.
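Injecting the empty Bill might look something like this; a sketch with a Struct standing in for the ActiveRecord model and invented discount logic:

```ruby
# Stand-in for the ActiveRecord Bill model.
Bill = Struct.new(:account, :amount)

class CreateMonthlyBill
  # The empty Bill is injected, so a test can pass in anything that
  # responds to #account= and #amount= instead of a real record.
  def initialize(bill: Bill.new)
    @bill = bill
  end

  def for(account)
    @bill.account = account
    @bill.amount = account[:amount] - (account[:type] == :friend ? 5 : 0)
    @bill
  end
end
```

Because nothing here touches the database, the spec stays fast and needs no FactoryBot setup.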
The tests don't look too bad at all. Our folder structure is looking really informative too.
\n\n\nCode's testability is very much affected by its design and structure. You might say that your tests inform the design of your code. I prefer to think that the design supports easier testing because it has an easy to understand structure. Code that is easily tested is typically easier to understand.
\nBusiness logic is easier to understand when expressed as actions. This allows Engineers to understand the function of your domain by simply reading file names. It also means it is easy to find relevant parts of your domain and they remain easy to test.
\nThe structure presented in this article isn't new. Uncle Bob and Gary Bernhardt along with many others have been talking about this before. Some call "actions" by their other name "use case classes".
\nHopefully with this design you can avoid fat controllers and fat models. Instead we can have skinny everything when we break down our domain into easy to understand pieces.
\nThanks for bearing with me, and feel free to tweet your feedback to me.
","plain":"On structuring Rails apps for growth. Often a tricky area this article will walk you through a refactor and hopefully you'll walk away with a few more ideas for structuring your business logic."}},{"title":{"html":"A story of fight over flight. Or how doing the things you're uncomfortable with\ncan help you in the long run.
","plain":"A story of fight over flight. Or how doing the things you're uncomfortable with\ncan help you in the long run."},"slug":"do-the-thing-that-hurts-the-most","tags":[],"publishedAt":{"pretty":"12th January 2016","iso":"2016-01-12T00:00:00.000+00:00"},"content":{"html":"A story of fight over flight. Or how doing the things you're uncomfortable with\ncan help you in the long run.
\nLiving life can hurt sometimes. There is a lot of pleasure in the world but pain\nexists. Not only does it exist but it is important: it is a survival instinct.\nPain is the thing that triggers your fight or flight. This blog post is about\nchoosing to fight and the advantages of fighting rather than flying off.
\nNow let me start by saying this post isn't about violence. It is about a braver\nkind of fighting that doesn't harm anyone. It's in relation to being a better\ndeveloper though this practice can stand you in good stead for handling life\ntoo.
\nPain is an indicator that something is going wrong. Noticing pain is your chance\nto fix the thing that's going wrong. Take a team that are suffering from slow\nand risky deployments.
\nA team is in charge of deploying a large and rather unmagnificent monolithic\napplication. It is built in PHP and has a mix of spaghetti western style and\nsome newer features in Symfony v1.
\nDeployments of the application are done over FTP with a maintenance mode so that\ndatabase changes can be made by hand without resulting in user data loss. At\nleast there is a staging version of the site for dry runs though keeping things\nconsistent across environments is a continued pain for the team.
\nEvery member of the team feels the pain. The project manager has a strict\nrelease management process that involves NASA-style checklists for every deploy\nand backout plans if anything goes wrong mid process. There is a strict\nquarterly deploy cycle since contemplating more regular deploys is just too\nscary.
\nThis pain and the fear thereof is indicative of an infection untreated.\nPlastered over carelessly the situation is only getting worse with time. Sure\ndeploys are spaced apart but they still go wrong most of the time. Enacting the\nbackout procedure finishes off what remains of the day.
\nWhen they finally get all the changes made to production they take the\nmaintenance mode page down only to find more bugs reported by users. These bugs\noften find themselves mysteriously fixed outside of the release cycle.
\nDrifting code on production then requires backporting to staging when\ndifferences are noticed. No one really knows which environment should be\ncanonical. The sign-up on staging has a remember me feature missing on production\nand production has email verification that staging does not. The team vaguely\nremember email verification being too hard to set up for staging as well as\nproduction. This problem, needless to say, is self-perpetuating. The more it\ncontinues the harder change becomes.
\nChange gets harder because of two factors:\n\n1. As the process becomes slow and cumbersome, so too does change to the\napplication.\n2. The more the team gets battered by process, the less energy they have left\nto fight it and make changes.
\nSo the more the team fears the deploys, the more process they put in place, the\nslower things get and the worse things become. Fear of pain in this case has\nresulted in flight mode for the company.
\nIn this case, fighting is the solution. The pain indicates the solution. If\ndeploys are painful, and doing them less is causing problems, go in the other\ndirection. Deploys are painful so do them more.
\nOnce the team comes to the agreement that the ever-slowing process isn't\nhelping, they brainstorm to identify problem areas. Large amounts of time spent\non UAT to cover three months' worth of changes. Equally large amounts of time\nspent on planning API integrations which still go wrong when deployed.\nDifferences between staging and production mean deployments still require\ndebugging when going to production. Paperwork specifications and deploy plans\nare usually out of date and inaccurate. Bugs need to be fixed faster than once\nevery three months.
\nThey realise there is too much work around their deploys but struggle to think\nof anything other than inventing more process and bureaucracy. Then one day a\ndeveloper, coming back from a conference, suggests the revolutionary idea of\ncontinuous delivery: the idea that deploying more often will reduce the pain\naround deployment.
\nInitially everyone was scared of making their pain more regular. They worried\nthat daily deploys would leave no time for anything other than deployment.\nHowever, a VP had heard the crazy idea circulating, did her own research and\nissued a mandate to increase the team's delivery rate.
\nThe team came up with a plan. They would aim to deliver every 10 business\ndays. They decided that any problem they faced they would solve, rather than\nuse it as a reason why regular delivery was a bad idea.
\nThe first struggle was the time it took to deploy. Instead of running from the\npain, they looked for ways to reduce it. Most of the time was lost to lack of\nparity between environments. To resolve this they set up a code repository so\nthey had a canonical source for their code. They then set up an SSH script to\nuse git to clone their code to each environment instead of FTP.
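The kind of script described might look something like this minimal sketch (the host, path and helper names are invented for illustration; a real script would also handle the first-time clone and failures):

```ruby
# Hypothetical sketch of the git-over-SSH deploy described above.
# APP_PATH and the host names are invented for illustration.
APP_PATH = "/var/www/app"

# Build the command run on the remote host: update from the canonical
# repository and check out the requested ref, instead of pushing files by FTP.
def deploy_command(ref)
  "cd #{APP_PATH} && git fetch origin && git checkout --force #{ref}"
end

# Run the command on a remote host over SSH.
def deploy(host, ref)
  system("ssh", host, deploy_command(ref))
end

deploy_command("main")
# => "cd /var/www/app && git fetch origin && git checkout --force main"
# deploy("deploy@staging.example.com", "main")
```

Because every environment pulls from the same repository, the canonical-source question answers itself.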
\nWith repeatable deployments, they then had to focus on what they were going to\ndeliver. They started to plan their work into deployable, achievable chunks.\nThey had to ensure code completion two days before deployment to leave enough\ntime for UAT. This is light years faster than their process before!
\nAlthough not at the eXtreme end of the agile spectrum, this team have felt the\nrewards of pushing against their fears and coming out the other side. They did\nthe thing that hurt until it hurt no more.
\nOf course you should not do everything that hurts. That would be silly.
\nWriting unit tests that sometimes pass and sometimes fail is painful, but that\ndoes not mean you should start writing more of those kinds of tests.
\nYou can also speed up too fast and cause disasters. If the team described above\nhad not fixed their deployment pipeline first, or had simply cut out UAT to\nspeed up their cycle, they may have deployed code that was not fit for purpose.
\nYou have to take advice like this with a pinch of salt.
\nThat said, a lot of the pain and fear of software delivery is caused by\nmalpractice sustained by unnecessary process. Management layers, excessive\nmeetings and planning, and bureaucracy in general are all symptoms that we are\noperating under conditions of fear.
\nThrough facing up to the pain, looking for actual solutions and\nremoving process where it merely bandages wounds that could have been avoided\naltogether, software teams can move faster and enjoy themselves at the same\ntime.
\nLet me know what you think. Share your own experiences. Tweet me\n@LukeMorton.
","plain":"A story of fight over flight. Or how doing the things you're uncomfortable with\ncan help you in the long run.\nLiving life can hurt sometimes. There is a lot of pleasure in the world but pain\nexists. Not only does it exist but it is important, it is a survival instinct.\nPain is the thing that triggers your fight or flight. This blog post is about\nchoosing to fight and the advantages of fighting rather than flying off.\nNow let me start by saying this post isn't about violence. It is about a braver\nkind of fighting that doesn't harm anyone. It's in relation to being a better\ndeveloper though this practice can stand you in good stead for handling life\ntoo.\nFeel the pain\nPain is an indicator that something is going wrong. Noticing pain is your chance\nto fix the thing that's going wrong. Take a team that are suffering from slow\nand risky deployments.\nA team is in charge of deploying a large and rather unmagnificent monolithic\napplication. It is built in PHP and has a mix of spaghetti western style and\nsome newer features in Symfony v1.\nDeployments of the application are done over FTP with a maintenance mode so that\ndatabase changes can be made by hand without resulting in user data loss. At\nleast there is a staging version of the site for dry runs though keeping things\nconsistent across environments is a continued pain for the team.\nEvery member of the team feels the pain. The project manager has a strict\nrelease management process that involves NASA-style checklists for every deploy\nand backout plans if anything goes wrong mid process. There is a strict\nquarterly deploy cycle since contemplating more regular deploys is just too\nscary.\nSelf-perpetuating problems\nThis pain and the fear thereof is indicative of an infection untreated.\nPlastered over carelessly the situation is only getting worse with time. Sure\ndeploys are spaced apart but they still go wrong most of the time. 
Enacting the\nbackout procedure finishes off what remains of the day.\nWhen they finally get all the changes made to production they take the\nmaintenance mode page down only to find more bugs reported by users. These bugs\noften find themselves mysteriously fixed outside of the release cycle.\nDrifting code on production then requires back porting to staging when\ndifferences are noticed. No one really knows which environment should be\ncanonical. The sign up on staging has a remember me feature missing on production\nand production has email verification that staging does not. The team vaguely\nremember email verification being too hard to setup for staging as well as\nproduction. This problem needless to say is self perpetuating. The more it\ncontinues the harder change becomes.\nChange gets harder because of two factors:\n\nAs the process is slow and cumbersome so too is change to the application\nAs the team gets battered by process the less energy they have fighting it\nto make changes\n\nSo the more the team fears the deploys, the more process they put in place, the\nslower things get and the worse things become. Fear of pain in this case has\nresulted in flight mode for the company.\nDeploys are painful so do them more\nIn this case, fighting is the solution. The pain indicates the solution. If\ndeploys are painful, and doing them less is causing problems, go in the other\ndirection. Deploys are painful so do them more.\nOnce the team comes to the agreement that the ever slowing process isn't helping\nthey brainstorm to identify problem areas. Large amounts of time spent on UAT to\ncover 3 months worth of changes. Equally large amounts of time spent on planning\nAPI integrations which still go wrong when deployed. Differences between staging\nand production mean deployments still require debugging when going to\nproduction. Paperwork specifications and deploy plans are usually out of date\nand inaccurate. 
Bugs need to be fixed quicker than every 3 months.\nThey realise there is too much work around their deploys but struggle to think\nof anything other than inventing more process and bureaucracy. Then one day a\ndeveloper coming back from a conference suggests the revolutionary idea of\ncontinuous delivery. The idea that deploying more often will reduce the pains\naround deployment.\nInitially everyone was scared of making their pain more regular. They worried\nthat daily deploys would lead to no time for anything other than deployment.\nHowever a VP had heard the crazy idea circulating, did her own research and\nenforced a mandate to increase the teams delivery rate.\nFacing the fear\nThe team came up with a plan. They would aim to deliver every 10 business\ndays. They decided any problem they face they would solve rather than use it as\na reason as to why regular delivery was a bad idea.\nThe first struggle was the time it took to deploy. Instead of running from the\npain they instead looked for ways to reduce the time it took to deploy. Most of\nthe time taken was lack of parity between environments. To resolve this they\nsetup a code repository so they had a canonical source for their code. They then\nsetup an SSH script to use git to clone their code to each environment instead\nof FTP.\nWith repeatable deployments they then had to focus on what they were going to\ndeliver. They started to plan their work into deployable achievable chunks. They\nhad to ensure code completion two days before deployment in order to ensure\nenough time for UAT. This is lightyears faster than their process before!\nAlthough not at the eXtreme end of the agile spectrum this team have felt the\nrewards of pushing against their fears and coming out the other side. They did\nthe thing that hurt until it hurt no more.\nPinch of salt\nOf course you should not do everything that hurts. 
That would be silly.\nWriting unit tests that sometimes pass and sometimes fail does not mean you\nshould start writing those kinds of tests more.\nYou can also speed up too fast and cause disasters. If the team described above\ndid not fix their deployment pipeline first or simply cut out UAT to speed up\ntheir cycle they may have simply deployed code that was not fit for purpose.\nYou have to take your advice with a pinch of salt.\nMove faster, enjoy yourself\nThat said, a lot of the pain and fear of software delivery is caused by\nmalpractice sustained by unnecessary process. Management layers, excessive\nmeetings and planning, bureaucracy generally are all symptoms that we are\noperating under the condition of fear.\nThrough facing up to indicative pain, in looking for actual solutions and\nremoving process where it bandages wounds that could have been avoided\naltogether, software teams can move faster and enjoy themselves at the same\ntime.\nLet me know what you think. Share your own experiences. Tweet me\n@LukeMorton."}},{"title":{"html":"In which I outline a strategy for Feature testing with rspec and capybara.
","plain":"In which I outline a strategy for Feature testing with rspec and capybara."},"slug":"feature-testing-in-2016","tags":["Ruby on Rails"],"publishedAt":{"pretty":"9th January 2016","iso":"2016-01-09T00:00:00.000+00:00"},"content":{"html":"In which I outline a strategy for Feature testing with rspec and capybara.
\nAt the end of last year I, along with friend, colleague and fellow islander\nDavid, decided upon a set way of writing\nfeature tests across our rails projects. Based on frustrations with cucumber,\nregexes and too much code sharing between scenarios, the following strategy was\ndevised.
\nIt builds upon a blog post by Future Learn on\nwriting readable feature tests in rspec. Without further ado:
\n\n\nAlong with this structure, there are some rules for keeping things tidy and\nmaintainable:
\n#assert_something
I wrote up more of the whys over on our\nMade Tech blog. This\nwas before some of our stricter rules were put in place.
\nWhat do you think? Get in touch via twitter\n@LukeMorton.
","plain":"In which I outline a strategy for Feature testing with rspec and capybara.\nAt the end of last year I, along with friend, colleague and fellow islander\nDavid decided upon a set way of writing\nfeature tests across our rails projects. Based on frustrations with cucumber,\nregex and too much code sharing between scenarios the following strategy was\ndevised.\nIt builds upon a blog post by Future Learn on\nwriting readable feature tests in rspec. Without further ado:\n\n\nAlong with this structure there are some rules for keeping things tidy and\nmaintainable:\n\nOnly one "given/when/then" per scenario (never start a step with "and")\nNever reuse "given/when/then" steps between scenarios\nAlways define steps within the scope of the feature\nDefine lets after private declaration for separation\nAny shared logic between steps should be placed in private methods defined\nbelow your let statements\nComplicated or multiple assertions in your "then" steps should be placed\nin well named methods like #assert_something\nRely on lets rather than instance variables\n\nI wrote up more of the whys over on our\nMade Tech blog. This\nwas before some of our more stricter rules were put in place.\nWhat do you think? Get in touch via twitter\n@LukeMorton."}},{"title":{"html":"In which I provide a few links to help scale the M in MVC,\nthe ActiveRecord in rails.
","plain":"In which I provide a few links to help scale the M in MVC,\nthe ActiveRecord in rails."},"slug":"better-active-record-mileage","tags":["Ruby on Rails"],"publishedAt":{"pretty":"12th September 2015","iso":"2015-09-12T00:00:00.000+00:00"},"content":{"html":"In which I provide a few links to help scale the M in MVC,\nthe ActiveRecord in rails.
\nThe basis of this post comes from one tweet I read.
\nThe greatest trick the ORM ever pulled was convincing the world the DB doesn't exist... and it's a disaster for a generation of devs
— Brad Urani (@bradurani) September 6, 2015
\n\nI saw this tweet by Brad and had a response that I commonly have to positions\nor declarations in the world of software engineering.
\n\n\nThat's a bit extreme.
\n
More and more I have this view. I feel rather mellow. That said, I responded on\ntwitter with an almost-troll comment, which immediately sinks me into holding a\nview which again could be considered extreme.
\n@bradurani @Baranosky People are still shipping products though?
— Luke Morton (@LukeMorton) September 6, 2015
\n\nAh, isn't life full of ironic opportunities. Anyway...
\nPeople are still shipping products but I think Brad is right in a way. The more\nwe introduce engineers to the world of web development via rails the more\nabstracted away from the concepts of the database they are. In the world of\nsmall business, the one I choose to operate in, roles aren't well defined.\nFull stack is about as defined as my role can get since on any given day I can\nbe building out UI components with Sass/BEM/pieces, designing refund\nsystems for Spree applications, setting up continuous delivery practices for new\nclients, writing chef recipes, finding new hires or writing blog\nposts. I didn't even mention databases here or the scaling of your models\nwhich are yet more skills required for generalists.
\nFor small businesses and for people entering the world of rails (or whatever\nyour framework) it's easy to become a generalist and suffer the consequences of\nbecoming a master of none. We need to be mindful, as our\napplications grow, of how to keep control of our ORMs.
\nBrad wrote a follow up post which I recommend you go read now before continuing.
\n\nThe author writes about the common pitfalls of Active Record and the usual\nstatements of denial, and provides some resources on how we might fix these\nproblems. I'd like to add to the mix a bunch of resources I find useful for\ntackling these issues.
\nGreat article on how to break down models with a trio of patterns.\nAlthough the author introduces these concepts with the aid of a gem, I think\nwe can achieve these patterns without any additional dependencies.
\nhttp://victorsavkin.com/post/41016739721/building-rich-domain-models-in-rails-separating
\nPiotr wrote a great piece on the things he's learnt whilst being a rails\ndeveloper:
\nhttp://solnic.eu/2015/03/04/8-things-i-learned-during-8-years-of-ruby-and-rails.html
\nAlright, this is a plug for one of my blog posts at Made. It's on topic though,\nand highlights how we might better use object-oriented as well as functional\nprogramming practices to scale our models further.
\nhttps://www.madetech.com/blog/boundaries-in-object-oriented-design
\nRather than leaning on complex models we can instead lean on hashes or\nhash-like objects to transfer data around our applications.\n\nhttp://brewhouse.io/2015/07/31/be-nice-to-others-and-your-future-self-use-data-objects.html\n/thoughts/2013-09-23-hashes-for-data
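As a minimal sketch of what a hash-like data object can look like (the Address type here is invented for illustration):

```ruby
# A data object carries values around and does nothing else.
Address = Struct.new(:street, :city, keyword_init: true)

home = Address.new(street: "1 Main St", city: "London")
home.city # => "London"
home.to_h # => { street: "1 Main St", city: "London" }
```

Because it has no behaviour of its own, it can cross layer boundaries without dragging persistence concerns along with it.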
\nI hesitated in posting this one since it's yet another list of design patterns.\nThen again, this whole blog post is about links to design patterns so it's\nincluded for completeness.
\nI'm going to write in the future on how design patterns are introducing more\nproblems to our applications through their blind use.
\nhttp://blog.codeclimate.com/blog/2012/10/17/7-ways-to-decompose-fat-activerecord-models/
\nBy using Query and Command objects we can avoid the need for callbacks,\nwhich are often a cause of confusing bugs.
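As a hedged sketch of the idea (the names are invented, not taken from the linked post): instead of an after-create callback quietly sending an email, a command object makes the side effect visible at the call site.

```ruby
# A command object that performs the write and its side effect explicitly,
# instead of hiding the email behind an ActiveRecord callback.
class CreateUser
  def initialize(store, mailer)
    @store = store
    @mailer = mailer
  end

  def call(attrs)
    @store << attrs
    @mailer.call(attrs[:email]) # visible here, not buried in a callback
    attrs
  end
end

sent = []
create_user = CreateUser.new([], ->(email) { sent << email })
create_user.call(email: "a@example.com")
sent # => ["a@example.com"]
```

Reading the call site tells you everything that happens; there is no hidden hook to hunt down when the email misbehaves.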
\nhttp://www.mattjohnston.co/blog/2013/07/07/dumb-data-objects/
\nIt's getting easier to use straight up SQL in rails.
\nhttps://github.com/rails/rails/pull/21536
\nUse ROM.rb instead!!\nhttp://rom-rb.org/
\n\nUse Lotus instead!!\nhttp://lotusrb.org/
\n\nOkay those last two links are more inspirational than aspirational. As engineers\nwho chose shops that use rails, we won't be escaping Active Record or rails in\ngeneral any time soon. Hopefully I've provided a few more links that\ncan help you scale out your models and ORMs further.
\nI haven't, however, provided resources on how you can utilise the power of your\ndatabase further, or on how, as a rubyist, to learn to be a DBA. One thing that\ncame out of my education was database normalisation, a concept some developers\nhaven't heard of. Let the conversation continue...
","plain":"In which I provide a few links to help scale the M in MVC,\nthe ActiveRecord in rails.\nThe basis of this post comes from one tweet I read.\nThe greatest trick the ORM ever pulled was convincing the world the DB doesn't exist... and it's a disaster for a generation of devs— Brad Urani (@bradurani) September 6, 2015\n\n\n\nI saw this tweet by Brad and had a response that I commonly have to positions\nor declarations in the world of software engineering.\n\nThat's a bit extreme.\n\nMore and more I have this view. I feel rather mellow. That said, I responded on\ntwitter almost a troll comment which immediately sinks me into having a view\nwhich again could be considered extreme.\n@bradurani @Baranosky People are still shipping products though?— Luke Morton (@LukeMorton) September 6, 2015\n\n\n\nAh, isn't life full of ironic opportunities. Anyway...\nPeople are still shipping products but I think Brad is right in a way. The more\nwe introduce engineers to the world of web development via rails the more\nabstracted away from the concepts of the database they are. In the world of\nsmall business, the one I choose to operate in, roles aren't well defined.\nFull stack is about as defined as my role can get since on any given day I can\nbe building out UI components with Sass/BEM/pieces, designing refund\nsystems for Spree applications, setting up continuous delivery practices for new\nclients, writing chef recipes, finding new hires or writing blog\nposts. I didn't even mention databases here or the scaling of your models\nwhich are yet more skills required for generalists.\nFor small businesses and for people entering the world of rails (or whatever\nyour framework) it's easy to become a generalist and suffer the consequences of\nbecoming the master of none. 
We need to be mindful as our\napplications grow how to keep control of our ORMs.\nResources for getting along with your ORM\nBrad wrote a follow up post which I recommend you go read now before continuing.\nhttps://medium.com/@bradurani/turning-the-tables-how-to-get-along-with-your-object-relational-mapper-e5d2d6a76573\nThe author writes the common pitfalls of Active Record, statements of denial\nand provides some resources to how we might fix these problems. I'd like to add\nto the mix a bunch of resources I find useful for tackling these issues.\nTackling god objects with entities, data objects and repositories\nGreat article on how to break down models with a trip of patterns.\nAlthough this author introduces these concepts with the aid of a gem, I think\nwe can achieve these patterns without any additional dependencies.\nhttp://victorsavkin.com/post/41016739721/building-rich-domain-models-in-rails-separating\nLearn from others\nPiotr wrote a great piece on the things he's learnt whilst being a rails\ndeveloper:\nhttp://solnic.eu/2015/03/04/8-things-i-learned-during-8-years-of-ruby-and-rails.html\nUse OO boundaries more efficiently\nAlright, this is a plug to one of my blog posts at Made. 
It's on topic though\nand highlights how we might better use object oriented as well as functional\nprogramming practices to scale our models further.\nhttps://www.madetech.com/blog/boundaries-in-object-oriented-design\nUse data objects\nRather than leaning on complex models we can instead lean on hashes or hash\nlike objects to transfer data around our applications.\n\nhttp://brewhouse.io/2015/07/31/be-nice-to-others-and-your-future-self-use-data-objects.html\n/thoughts/2013-09-23-hashes-for-data\n\nMoar patterns\nI hesistated in posting this one since it's yet another list of design patterns.\nThen again, this whole blog post is about links to design patterns so it's\nincluded for completeness.\nI'm going to write in the future on how design patterns are introducing more\nproblems to our applications through their blind use.\nhttp://blog.codeclimate.com/blog/2012/10/17/7-ways-to-decompose-fat-activerecord-models/\nAvoid callbacks\nBy using Query and Command objects we can avoid the necessity for callbacks\nwhich are often a cause for confusing bugs.\nhttp://www.mattjohnston.co/blog/2013/07/07/dumb-data-objects/\nUse SQL prepared statements in rails 5\nIt's getting easier to use straight up SQL in rails.\nhttps://github.com/rails/rails/pull/21536\nAvoid ActiveRecord\nUse ROM.rb instead!!\nhttp://rom-rb.org/\nAvoid rails\nUse Lotus instead!!\nhttp://lotusrb.org/\nConclusion\nOkay those last two links are more inspirational than aspirational. As engineers\nwho chose shops that use rails, we won't be escaping Active Record or rails in\ngeneral any time soon. Hopefully I've provided a few more links that\ncan help you scale out your models and ORMs further.\nI haven't however provided resources to how you can utilise the power of your\ndatabase further. Or even learn as a rubyist how to be a DBA. One thing that\ncame out of my education was database normalisation a concept some developers\nhaven't heard of. 
Let the conversation continue..."}},{"title":{"html":"Where I explain what I've been up to.
","plain":"Where I explain what I've been up to."},"slug":"hiatus-over","tags":[],"publishedAt":{"pretty":"19th July 2015","iso":"2015-07-19T00:00:00.000+00:00"},"content":{"html":"Where I explain what I've been up to.
\nWhen I moved from the startup world of uncertainty to the somewhat more certain\nworld of delivering web apps, e-commerce and otherwise, my writing\nslowed. Getting your knowledge down, distilling it into written form, is an\nimportant part of consuming information. So here I am, back and ready to distill\nmy current thought processes about the world wide web.
\nMoving to Made, introducing continuous delivery pipelines,\nbringing agile rails to teams who want to modernise, writing for\nMade's blog and hiring fledgling developers has been\na career changer for me. It's all very exciting stuff!
\nThe opportunities presented at Made Tech are eye openers and exciting but I'm\nalso very busy. Being pushed into blogging with the rest of our team has only\nhighlighted the neglect I've been giving my own site. I am pretty opinionated\nbut I also hope I'm a conduit for interesting conversations, I want to put more\neffort into my personal blogging.
\nOver the past year and a half my idealism has been balanced out with a healthy\nsense of pragmatism. I am a fan of using the tools\navailable rather than reinventing several\nwheels. That said, I always have my idealism as an end goal, it's just one\nI'm getting more and more relaxed about not reaching.
\nFunctional programming is still having a huge impact on the way I see systems\nand components interacting. I'm an avid fan of map/reduce, and my use of the\nmore basic constructs of arrays and hashes creeps into my rails work. I'm\ntesting like crazy these days at both the feature and unit level.
\nAs I work with teams and improve our workflows, my understanding of agile is\ngrowing. There are no magic bullets to controlling and delivering projects, but\nthe understanding that it's always a human issue and that conversation beats\nany tool and process is now engrained in me.
\nI hope over the coming weeks and months I will be able to share some of my more\npersonal journeys through the web.
","plain":"Where I explain what I've been up to.\nWhen I moved from the startup world of uncertainty to a little more certain\nworld of delivering web apps, e-commerce and otherwise, my writing\nslowed. Getting your knowledge down, distilling it into written form is an\nimportant part of consuming information. So here I am back ready to distill\nmy current thought processes about the world wide web.\nMoving to Made, introducing continuous delivery pipelines,\nbringing agile rails to teams who want to modernise, writing for\nMade's blog and hiring fledgling developers has been\na career changer for me. It's all very exciting stuff!\nThe opportunities presented at Made Tech are eye openers and exciting but I'm\nalso very busy. Being pushed into blogging with the rest of our team has only\nhighlighted the neglect I've been giving my own site. I am pretty opinionated\nbut I also hope I'm a conduit for interesting conversations, I want to put more\neffort into my personal blogging.\nOver the past year and a half my idealism has been balanced out with a healthy\nsense of pragmatism. I am a fan of using the tools\navailable rather than reinventing several\nwheels. That said, I always have my idealism as an end goal, it's just one\nI'm getting more and more relaxed about not reaching.\nFunctional programming is still having a huge impact on the way I see systems\nand components interacting. I'm an avid fan of map/reduce and using more basic\nconstructs of arrays and hashes eeks into my rails work. I'm testing like crazy\nthese days at both the feature and unit level.\nAs I work with teams and improve our workflows, my understanding of agile is\ngrowing. 
There are no magic bullets to controlling and delivering projects, but\nthe understanding that it's always a human issue and that conversation beats\nany tool and process is now engrained in me.\nI hope over the coming weeks and months I will be able to share some of my more\npersonal journeys through the web."}},{"title":{"html":"An explanation as to why I don't like more than one public\nmethod per class.
","plain":"An explanation as to why I don't like more than one public\nmethod per class."},"slug":"more-methods-more-problems","tags":[],"publishedAt":{"pretty":"14th November 2013","iso":"2013-11-14T00:00:00.000+00:00"},"content":{"html":"An explanation as to why I don't like more than one public\nmethod per class.
\nI've written about this before. If your classes are\ngoing to have a single responsibility why offer more than\none way to perform that responsibility?
\nMultiple methods per class – and by this I mean publicly\nexposed ones – cause problems in a number of ways.
\nFirstly methods should do something. If your class has\nmultiple public methods it will likely be doing multiple\nthings.
\n\n\nSo here we have a large model UserModel
. You should already\nhave your nose up at this unimplemented class. It does too\nmuch. The methods #register
, #login
and #update_profile
\nmight have logic in common but they are very different and\nhave different responsibilities. Having all these methods in\none class means you will have some shared logic in private\nmethods but a hell of a lot of specific private methods that\naren't used by the other public methods.
Using the Data component of IDV you could create\nthree data actions:
\n\n\nThey might share some logic but package that logic up in\nanother class they all share rather than putting all this\nlogic in one class.
\nYou could share logic via an abstract class but this isn't wise\nin the long run. Inheritance should be avoided as much as\nmultiple public methods. Multiple responsibilities and\nextension of abstract (or even worse, concrete) classes are\nexamples of coupling and aren't as flexible as dependency\ninjection.
\n\n\nInject shared logic at runtime rather than couple your code\nall the time
\n
Let's move onto an exception. Sometimes your methods might be completely related to one another. The only two examples of\nvalid multi-method classes I can think of are\nInteraction Controllers and Data Mappers.
\nLet's take a user data mapper for example.
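The original code sample is missing here; a hypothetical sketch of the mapper being discussed, with a plain Hash standing in for the injected Mongo collection so the sketch runs standalone, might be:

```ruby
class UserMongoDataMapper
  # The collection is the only shared state, injected on construction.
  def initialize(collection)
    @collection = collection
  end

  def find_one_by_id(id)
    @collection[id]
  end

  def find_by_ids(ids)
    ids.map { |id| @collection[id] }.compact
  end
end

mapper = UserMongoDataMapper.new({ 1 => { name: "Ada" }, 2 => { name: "Grace" } })
mapper.find_one_by_id(1)   # => { name: "Ada" }
mapper.find_by_ids([1, 2]) # => [{ name: "Ada" }, { name: "Grace" }]
```

Neither method touches state that the other writes, which is what keeps them independent.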
\n\n\nSo why do I think this is okay? Well firstly a mongo specific\ndata mapper for a user is a pretty specific responsibility.\nThe class does not have one single responsibility though. It\nhas the responsibility of finding one document by ID and many\ndocuments by an array of IDs. Two responsibilities but I still\nthink this is okay and let me explain why.
\nThe methods #find_one_by_id
and #find_by_ids
are\nstandalone but will share the collection instance injected so\nthis is one bit of logic that would need to be repeated or\ninherited if we split this class into two.
Both methods share the state initialised on construction, the collection;\nhowever, they are still fairly independent and\natomic. I see these methods as single responsibilities packaged\nunder a single namespace UserMongoDataMapper
. As long as the methods remain SRP and share the majority of logic within the\ndata mapper, they can remain in one class.
So we've now identified an exception – that is – when methods\nare independent, atomic and share most private logic in the\nclass then it might be okay to keep them in one object.
\nAtomicity is important. I might have just made that word up\nso I'll define it. When called, a method should be\ntotally independent and rely on no shared state with other\npublic methods. If calling #find_one_by_id
affected a\nlater call to #find_by_ids
then these methods would not be\natomic. They are coupled and definitely not single-responsibility.\nFurthermore, leaking these implementation side\neffects means you are introducing hidden\ncoupling into your application. Little secrets such as the\nside effects of calling methods of an instance in different\norders lead to many subtle bugs. Just don't do it!
Multiple public methods make a class more difficult to reason\nabout. The developer using it will need to know which methods to use when,\nand the interface of each method; so will your\ncode. The more methods in your program, the more coupled to implementation\nyour application will become. This should be\nobvious:
\n\n\nThe more code you write the more problems you are going to\nhave so don't write as much
\n
I'm going to quickly summarise the points I've made so you can\nargue in favour of the statement "more methods, more\nproblems".\n\n- Methods have a single responsibility; having multiple\nmethods per class means the class does not have a single\nresponsibility\n- Methods may share logic with related methods but they will\nalso have independent logic – coupling related methods is\na messy way to share logic between components –\ntry injecting logic instead\n- Methods should be atomic operations; if they aren't then\nyou'll be introducing hidden coupling (think method call\norder) and subtle bugs into your application\n- The more methods you write, the more code your application\nwill have; the more the code, the more the bugs
\nThe OOP lot like to hide complexity in pretty-looking\nchainable fluent interfaces. That's an ironically complex\nsolution for a problem aimed at reducing complexity.
\nI know what the magicians are saying, "a class with a few\nsetters and getters is hardly complex."
\nMaybe not but I'm not buying your evil magic friend. You and\nyour tempting class of tricks can stay away from my\napplication party.
","plain":"An explanation as to why I don't like more than one public\nmethod per class.\nI've written about this before. If your classes are\ngoing to have a single responsibility why offer more than\none way to perform that responsibility?\nMultiple methods per class – and by this I mean publically\nexposed ones – cause problems in a number of ways.\nMultiple responsibilities\nFirstly methods should do something. If your class has\nmultiple public methods it will likely be doing multiple\nthings.\n\n\nSo here we have a large model UserModel. You should already\nhave your nose up at this unimplemented class. It does too\nmuch. The methods #register, #login and #update_profile\nmight have logic in common but they are very different and\nhave different responsibilities. Having all these methods in\none class means you will have some shared logic in private\nmethods but a hell of a lot of specific private methods that\naren't used by the other public methods.\nUsing the Data component of IDV you could create\nthree data actions:\n\n\nThey might share some logic but package that logic up in\nanother class they all share rather than putting all this\nlogic in one class.\nYou could share logic by an abstract class but this isn't wise\nin the long run. Inheritence should be avoided as much as\nmultiple public methods. Multiple responsibilities and\nextension of abstract (or even worse concrete) classes are\nexamples of coupling and aren't as flexible as dependency\ninjection.\n\nInject shared logic at runtime rather than couple your code\nall the time\n\nAn Exception\nLet's move onto an exception. Sometimes your methods might be completely related to one another. The only two examples of\nvalid multi-method classes I can think of are\nInteraction Controllers and Data Mappers.\nLet's take a user data mapper for example.\n\n\nSo why do I think this is okay? 
Well, firstly, a mongo-specific\ndata mapper for a user is a pretty specific responsibility.\nThe class does not have one single responsibility though. It\nhas the responsibility of finding one document by ID and many\ndocuments by an array of IDs. Two responsibilities, but I still\nthink this is okay, and let me explain why.\nThe methods #find_one_by_id and #find_by_ids are\nstandalone, but will share the injected collection instance, so\nthis is one bit of logic that would need to be repeated or\ninherited if we split this class into two.\nBoth methods share the state initialised on construction, the collection; however, they are still fairly independent and\natomic. I see these methods as single responsibilities packaged\nunder a single namespace UserMongoDataMapper. As long as the methods remain SRP and share the majority of logic within the\ndata mapper, they can remain in one class.\nSo we've now identified an exception – that is – when methods\nare independent, atomic and share most private logic in the\nclass, it might be okay to keep them in one object.\nAtomic\nAtomicity is important. I might have just made that word up\nso I'll define it. When called, a method should be\ntotally independent and rely on no shared state with other\npublic methods. If calling #find_one_by_id affected a\nlater call to #find_by_ids then these methods would not be\natomic. They are coupled and definitely not single\nresponsibility. Furthermore, leaking these implementation side\neffects into your application means you are introducing hidden\ncoupling into your application. Little secrets such as the\nside effects of calling methods of an instance in different\norders lead to many subtle bugs. Just don't do it!\nThe obvious\nMultiple public methods make a class more difficult to reason\nabout. The developer using it will need to know which methods\nto use and the interfaces for each, and so will your\ncode. 
The more methods in your program, the more coupled to implementation your application will become. This should be\nobvious:\n\nThe more code you write, the more problems you are going to\nhave, so don't write as much\n\nSummary\nI'm going to quickly summarise the points I've made so you can\nargue in favour of the statement "more methods, more\nproblems".\n\nMethods have a single responsibility, having multiple\nmethods per class means the class does not have a single\nresponsibility\nMethods may share logic with related methods but they will\nalso have independent logic – coupling related methods is\na messy way to share logic between components –\ntry injecting logic instead\nMethods should be atomic operations, if they aren't then\nyou'll be introducing hidden coupling (think method call\norder) and subtle bugs into your application\nThe more methods you write, the more code your application\nwill have, the more the code, the more the bugs\n\nThe OOP lot like to hide complexity in pretty-looking\nchainable fluent interfaces. That's an ironically complex\nsolution to the problem of reducing complexity.\nI know what the magicians are saying, "a class with a few\nsetters and getters is hardly complex."\nMaybe not, but I'm not buying it, you evil magic friend. You and\nyour tempting class of tricks can stay away from my\napplication party."}},{"title":{"html":"That's right. It's time to leave your frameworks behind\nyou.
","plain":"That's right. It's time to leave your frameworks behind\nyou."},"slug":"sans-framework-generation","tags":[],"publishedAt":{"pretty":"28th September 2013","iso":"2013-09-28T00:00:00.000+00:00"},"content":{"html":"That's right. It's time to leave your frameworks behind\nyou.
\nThis isn't advice. Okay it is. But you seriously need to think\nabout what I'm about to say. Read and reread the following\nstatement.
\n\n\nFrameworks aren't bad, but being locked into them is
\n
What do I mean by this? I mean bad things come from projects\nthat get locked to a framework. By locked I mean coupled. Slow\nrails tests anyone? Difficulty deconstructing applications\ninto smaller services due to reliance on a particular way of\ndoing something? Decided omakase isn't for you?
\nWhatever the problem, it comes down to locking yourself in.\nVendor lock-in is shitty. When your entire business gives\nitself to one vendor, it's a risk.
\nThere's a better way. Write your business logic before\nchoosing a framework. Work out your wireframes, build HTML\nprototypes, do some TDD for your user stories. The key is to\ndefer the framework decision. Hell, defer all your\nbase.
\nI'm serious here. Why write your framework code first? How\ndoes it make any sense to do something the rails way? You\nshould do it your application's way. That doesn't mean your\napplication logic won't fit into the rails paradigm. Just\nwrite your application logic so it doesn't care what\ninterface it uses to deliver content to the user. Rails does\nthis particularly poorly since you end up using a lot of logic\nprovided by its framework. To get the benefits of rails you\nreally do have to go the rails way, but then you're fucked.
\nYou decide where you stand but I'm of the sans framework\ngeneration.
\nComments to @LukeMorton please.
","plain":"That's right. It's time to leave your frameworks behind\nyou.\nThis isn't advice. Okay it is. But you seriously need to think\nabout what I'm about to say. Read and reread the following\nstatement.\n\nFrameworks aren't bad, but being locked into them is\n\nWhat do I mean by this? I mean bad things come from projects\nthat get locked to a framework. By locked I mean coupled. Slow\nrails tests anyone? Difficulty deconstructing applications\ninto smaller services due to reliance on a particular way of\ndoing something? Decided omakase isn't for you?\nWhatever the problem it comes down to locking yourself in.\nVendor lock in is shitty. When your entire business gives\nitself to one vendor it's a risk.\nThere's a better way. Write your business logic before\nchoosing a framework. Work out your wireframes, build HTML\nprototypes, do some TDD for your user stories. The key is to\ndefer the framework decision. Hell, defer all your\nbase.\nI'm serious here. Why write your framework code first? How\ndoes it make any sense to do something the rails way? You\nshould do it your applications way. That doesn't mean your\napplication logic won't fit into the rails paradigm. Just\nwrite your application logic so it doesn't care for what\ninterface it uses to deliver content to the user. Rails does\nthis particularly poorly since you end up using a lot of logic\nprovided by it's framework. To get the benefits of rails you\ndo really have to go the rails way, but then you're fucked.\nYou decide where you stand but I'm of the sans framework\ngeneration.\nComments to @LukeMorton please."}},{"title":{"html":"Introducing the Interaction, Data and View\ndesign pattern.
","plain":"Introducing the Interaction, Data and View\ndesign pattern."},"slug":"IDV","tags":[],"publishedAt":{"pretty":"27th September 2013","iso":"2013-09-27T00:00:00.000+00:00"},"content":{"html":"Introducing the Interaction, Data and View\ndesign pattern.
\n\n\nIDV = I + D + V
\n
On every project I've been employed to work on so far, I've taken\non the responsibility of reorganising brownfield applications\nand designing the architecture for greenfield ones. The\nIDV pattern comes from my frustration with change. Not that I\ndon't like change – I love what change brings. No, it's the\nfact that change is hard.
\n\n\nChange should be cheap
\n
I really believe this. Change happens and is required in\nbusiness. Applications represent businesses. Applications need\nto be changed for business reasons. If change is inevitable,\nshouldn't we prepare for it? What would being prepared for\nchange look like? Or, to put it another way. What does not\nbeing prepared for change look like?
\nFace it, it looks a mess and we've all been there.
\nLike I said, IDV came from a frustration with change caused by\nthe problems listed above. So I have potential solutions. In\nfact, I'd call them working solutions because I've developed\nand utilised them over time.
\nThe main premise is the separation of concerns by introducing\nclear boundaries to your business and application logic. By\nusing a small set of interfaces we can avoid a lot of these\nproblems of change.
\nI say small set, it's purposefully a small set. The idea is\nthat the interfaces you introduce will not change. So we\ndefine broad and generic method interfaces so these unchanging\nparts never cause a problem.
\nWhat does an unchanging interface look like?
\nAn unchanging interface will be method based rather than\nclass based so that we can switch implementations at\nanytime.
\nMethod names will be generic enough not to hint any\nimplementation details.
\nThe only parameter will be a hash. This allows any and all\nkinds of information to be passed in. (At least in\ndynamically typed languages.)
\nThe only returned value will also be a hash. This means\nanyone can consume the data returned. Hashes and scalar\nvalues should be the only values returned within the main\nhash.
\nA flexible unchanging interface means that communication\nbetween areas of your application can happen in a limited way.\nThis means changes to logic inside the interfaces are fully\nencapsulated and less likely to cause problems in other\nareas of the application.
\nWe do however need to decide where to draw these interface\nlines.
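Taken together, the criteria above might look like this minimal Ruby sketch; the class and method names are my own illustrations, not from the article:

```ruby
# A hypothetical component meeting the unchanging interface
# criteria: one generically named method, a hash in, a hash out.
class ExampleDataModel
  def to_hash(params)
    # Implementation details stay hidden behind the interface;
    # any object with the same #to_hash(hash) -> hash shape can
    # be substituted without callers noticing.
    { user: { id: params[:id], name: "Example" } }
  end
end

ExampleDataModel.new.to_hash(id: 1)
# => { user: { id: 1, name: "Example" } }
```

The caller only ever depends on the method name and the hash shapes, so swapping the implementation behind it never ripples outward.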
\nIn order to decide where to implement these unchanging\ninterfaces we need to understand the separation of concerns.\nWe need to work out where to separate. The answer is in the\nname. We need to separate out the parts that aren't concerned\nwith how the other parts do things.
\nHow do most web frameworks split out application logic? MVC!\nMVC tells us that Model, View and Controller logic are\nseparate concerns.
\nMVC fails the unchanging interface criteria.\nModels in the web application world often have many methods and\nare used in views and controllers. Controllers can have many\nactions, and views have many methods, often mixing logic into\ntemplates or controllers. This makes for disaster\nbecause the interface of a class is the public methods it\nexposes. The more methods exposed, the more exposed your\napplication is to changes made in those methods.
\n\n\nMVC is a bastardised separation of concerns
\n
So what areas is MVC trying to tackle? Uncle Bob tells us\nthat the business logic should be wrapped in interface logic.\nSo we have business logic, M. Snug in between the interface\nand business logic is a view layer for presenting the business\ndata and logic, that's V. C is the controller layer which\nis basically the communication of the areas of the application\nto the interface by which the application is delivered.
\nI'd like to define these better. And not in some weird circle\ndiagram. I'd still split it into three layers like MVC:
\nOr in other words IDV: Interaction, Data and View.
\nThis isn't clearly defined enough yet though. Each one of\nthese concerns has several concerns of its own. We'll discuss\neach section and their concerns briefly.
\nInteraction is the application itself. It is the delivery\nmechanism for the application's content. It is also the\nlayer that communicates with the domain in order to produce\nthe application's content. That's two concerns right there.
\nOr in two simple terms Application and Controller.
\n\n\nI = A + C
\n
Typically, routing and protocol handling will be done in the\napplication layer. This layer will then communicate with one\nor more controllers. The controllers will then interact with\nthe data and view layers and produce a response, which is then\nreturned to the application for delivery to the user.
\nI have written more about the interaction layer if you\nwish to find out more.
\nData is the core business logic. It deals with asking business\nquestions of data. It also handles the inserting, updating and\ndeletion of business data. Along with this business logic it\nalso handles the communication with the data sources of the\napplication. Three concerns.
\nOr more simply Mapper, Model and Action.
\n\n\nD = Ma + Mo + A
\n
Models and actions will be called by a controller. The\ncontroller will pass in mappers and other request information\nto the models and actions and pass their responses into the\nview layer or immediately return control to the application\nlayer.
\nI have written more about the data layer if you wish to\nfind out more.
\nView is the translation of business data into a presentation\nfor the user. It handles the structure of data for\npresentation. It also handles the modelling of data for\npresentation. Along with these responsibilities it also needs\nto merge the modelled data into the structure. Three concerns\njust like data.
\nOr more simply Template, Model and\nTemplate Engine.
\n\n\nV = T + M + TE
\n
The controller will first pass data from the data layer and\nthe request into the view model. It will then use the template\nengine to merge this view model with a template. This data\nwill then be formed into a response and passed back to the\napplication layer.
\nI have written more about the view layer if you wish to\nfind out more.
\nUsing unchanging interfaces between each one of the sub\nconcerns defined above will allow you to substitute each\nconcern as per the Liskov substitution principle. Being\nable to switch out each component separately means they can\nbe tested independently, parts can be replaced without\naffecting other sections of the application and you only have\nto introduce 8 types of interfaces to your entire application.
\n\n\nIDV = (A + C) (Ma + Mo + A) (T + M + TE)
\n
This article was more theory than code examples so I apologise\nfor that. You'll find in the links throughout code examples of\neach layer of IDV. If I tried to fit them all into this one\narticle there would have been trouble.
\nLet me know what you think @LukeMorton.
","plain":"Introducing the Interaction, Data and View\ndesign pattern.\n\nIDV = I + D + V\n\nEvery project I've been employed to work on so far I've taken\non the responsibility for reorganising brownfield and\ndesigning the architecture for greenfield applications. The\nIDV pattern comes from my frustration of change. Not that I\ndon't like change – I love what change brings. No, it's the\nfact change is hard.\n\nChange should be cheap\n\nI really believe this. Change happens and is required in\nbusiness. Applications represent businesses. Applications need\nto be changed for business reasons. If change is inevitable,\nshouldn't we prepare for it? What would being prepared for\nchange look like? Or, to put it another way. What does not\nbeing prepared for change look like?\n\nThings are tightly coupled\nLocked to vendors (concrete implementations everywhere)\nLeaky encapsulation\nConfusing or undocumented interfaces\n\nFace it, it looks a mess and we've all been there.\nLike I said, IDV came from a frustration of change caused by\nthe problems listed above. So I have potential solutions. In\nfact I'd call them working solutions because I've developed\nand utilised them over time.\nThe main premise is the separation of concerns by introducing\nclear boundaries to your business and application logic. By\nusing a small set of interfaces we can avoid a lot of these\nproblems of change.\nI say small set, it's purposefully a small set. The idea is\nthat the interfaces you introduce will not change. So we\ndefine broad and generic method interfaces so these unchanging\nparts never cause a problem.\nUnchanging interfaces\nWhat does an unchanging interface look like?\n\nAn unchanging interface will be method based rather than\nclass based so that we can switch implementations at\nanytime.\n\nMethod names will be generic enough not to hint any\nimplementation details.\n\nThe only parameter will be a hash. This allows any and all\nkinds of information to be passed in. 
(At least in\ndynamically typed languages.)\n\nThe only returned value will also be a hash. This means\nanyone can consume the data returned. Hashes and scalar\nvalues should be the only values returned within the main\nhash.\n\n\nA flexible unchanging interface means that communication\nbetween areas of your application can happen in a limited way.\nThis means change to logic inside the interfaces are fully\nencapsulated and are less likely to cause problems in other\nareas of the application.\nWe do however need to decide where to draw these interface\nlines.\nSeparation of concerns\nIn order to decide where to implement these unchanging\ninterfaces we need to understand the separation of concerns.\nWe need to work out where to separate. The answer is in the\nname. We need to separate out the parts that aren't concerned\nwith how the other parts do things.\nHow do most web frameworks split out application logic? MVC!\nMVC tells us that Model, View and Controller logic are\nseparate concerns.\nMVC fails with the unchanging interface criteria.\nModels in web application world often have many methods and\nare used in views and controllers. Controllers can have many\nactions and views have many methods and often mix logic with\ntemplates or logic in controllers. This makes for disaster\nbecause the interface of a class is the public methods it\nexposes. The more methods exposed the more weakness your\napplication will have to the changes made in these methods.\n\nMVC is a bastardised separation of concerns\n\nSo what areas is MVC trying to tackle? Uncle Bob tells us\nthat the business logic should be wrapped in interface logic.\nSo we have business logic, M. Snug in between the interface\nand business logic is a view layer for presenting the business\ndata and logic, that's V. C is the controller layer which\nis basically the communication of the areas of the application\nto the interface by which the application is delivered.\nI'd like to define these better. 
And not in some weird circle\ndiagram. I'd still split it into three layers like MVC:\n\nThe protocol for delivering the application\nThe data layer for applying business logic to data\nThe view layer for presentation\n\nOr in other words IDV: Interaction, Data and View.\nThis isn't clearly defined enough yet though. Each one of\nthese concerns have several concerns themselves. We'll discuss\neach section and their concerns briefly.\nInteraction\nInteraction is the application itself. It is the delivery\nmechanism for the applications content. It is also the\nlayer that communicates with the domain in order to produce\nthe applications content. That's two concerns right there.\n\nInteraction with user over protocol communication\nInteraction with the business (domain) layer\n\nOr in two simple terms Application and Controller.\n\nI = A + C\n\nTypicalling routing and protocol handling will be done in the\napplication layer. This layer will then communicate with one\nor more controllers. The controllers will then interact with\nthe data and view layers and produce a response which is then\nreturned back to the application for delivery to the user.\nI have written more about the interaction layer if you\nwish to find out more.\nData\nData is the core business logic. It deals with asking business\nquestions of data. It also handles the inserting, updating and\ndeletion of business data. Along with this business logic it\nalso handles the communication with the data sources of the\napplication. Three concerns.\n\nInteraction with data sources\nModelling of data for answering business questions\nPerforming business actions on data\n\nOr more simply Mapper, Model and Action.\n\nD = Ma + Mo + A\n\nModels and actions will be called by a controller. 
The\ncontroller will pass in mappers and other request information\nto the models and actions and pass their responses into the\nview layer or immediately return control to the application\nlayer.\nI have written more about the data layer if you wish to\nfind out more.\nView\nView is the translation of business data into a presentation\nfor the user. It handles the structure of data for\npresentation. It also handles the modelling of data for\npresentation. Along with these responsibilities it also needs\nto merge the modelled data into the structure. Three concerns\njust like data.\n\nStructuring data for presentation\nModelling data for presentation\nMerging the structure and modelled data for presentation\n\nOr more simply Template, Model and\nTemplate Engine.\n\nV = T + M + TE\n\nThe controller will first pass data from the view layer and\nrequest into the view model. It will then use the template\nengine to merge this view model with a template. This data\nwill then be formed into a response and passed back to the\napplication layer.\nI have written more about the view layer if you wish to\nfind out more.\nSummary\nUsing unchanging interfaces between each one of the sub\nconcerns defined above will allow you to substitute each\nconcern as per the Liskov substitution principle. Being\nable to switch out each component separately means they can\nbe tested independently, parts can be replaced without\naffecting other sections of the application and you only have\nto introduce 8 types of interfaces to your entire application.\n\nIDV = (A + C) (Ma + Mo + A) (T + M + TE)\n\nThis article was more theory than code examples so I apologise\nfor that. You'll find in the links throughout code examples of\neach layer of IDV. If I tried to fit them all into this one\narticle there would have been trouble.\nLet me know what you think @LukeMorton."}},{"title":{"html":"Some thoughts on application interaction. This is your\napplication logic and controllers.
","plain":"Some thoughts on application interaction. This is your\napplication logic and controllers."},"slug":"interaction","tags":[],"publishedAt":{"pretty":"26th September 2013","iso":"2013-09-26T00:00:00.000+00:00"},"content":{"html":"Some thoughts on application interaction. This is your\napplication logic and controllers.
\nSo you've got your data and view triads written and\ntested. How are you supposed to serve them via HTTP to your\nusers? This is where the interaction layer comes into play.
\nApplications are interfaces to your business logic. They can\nbe HTTP (web) applications, command line clients, bare bones\nTCP, ZeroMQ or whatever!
\nLet's start with a sinatra application.
\n\n\nOkay, so I've provided you with a hello world for sinatra and\nminimal business logic. No data or view logic. However, I've\nalready covered that malarkey. This example is an application.\nIt is the HTTP interface that the business logic requires.
\nIt is, however, only suitable for tiny applications or ones\nwith a limited shelf life. We now need to introduce the other\npart of interaction: controllers.
\n\n\nSo now we've added a controller to contain our (albeit overly\nsimplistic) business logic. This abstraction allows us to\nseparate our application interface, in this case sinatra\nrouting, from our business logic execution.
\nLet's go over two points:
\nAhhh, separation of concerns.
\nApplications know about receiving requests and serving\nresponses. They don't care about business logic\nimplementation. It never needs to know about your mappers, data\nand view models, etc. All an application needs to do is pass\nrequest values into a controller and receive a response back.
\nControllers talk to your business logic and amalgamate it for\napplication consumption. A controller might have a #view
\nmethod for GET requests in HTTP, or #action
for POST\nrequests, but these method names should be protocol, and thus\napplication, agnostic. A view can be served over many protocols,\nand actions can be received over just as many.
This separation then gives you the flexibility to swap out\napplications. Start with Sinatra, move to Rails, realise it's\na piece of junk, move to Padrino and then settle on Sinatra\nwith some Padrino components. Your controllers should not need\nto be changed. Your application layer should normalise\nrequests into hashes so that your controllers can remain\nignorant of them.
\nThink of the additional power this provides. Being able to\nreplace your application layer with anything. Say a test\nlayer? You can run your entire application stack from tests\nwithout requiring a framework at all! Never again will you\nhave to put up with slow rails integration tests.
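That framework-free test layer can be sketched as follows. The controller and route names are illustrative assumptions; the sinatra route is shown only as a comment so the snippet runs without the gem:

```ruby
# Hypothetical controller: it only ever sees plain hashes, so it
# has no idea whether it is being driven by sinatra, Rails or a
# bare test script.
class GreetingController
  # Maps to GET over HTTP, but the name stays protocol agnostic.
  def view(request)
    { status: 200, body: "Hello, #{request[:name] || 'world'}" }
  end
end

# A sinatra application layer would just normalise the request
# into a hash and delegate:
#
#   get "/hello/:name" do
#     result = GreetingController.new.view(name: params[:name])
#     status result[:status]
#     result[:body]
#   end

# ...while a test drives the very same controller with no
# framework loaded at all:
GreetingController.new.view(name: "Luke")
# => { status: 200, body: "Hello, Luke" }
```

Swapping Sinatra for Rails, Padrino or a test harness only changes the commented adapter; the controller and everything beneath it stay untouched.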
\nWith this abstraction of interaction you will be able to grow\nyour entire application without ever being stuck to a\nparticular vendor. I'm not against using rails. I'm against\nbeing stuck to it. This pattern described here will keep you\nfree of those constraints.
\nI use this equation to describe the interaction layer.
\n\n\nI = A + C
\n
I've previously written about the data and view\ntriads. Let me know what you think of the I, D or the V that\nI've written about @LukeMorton.
","plain":"Some thoughts on application interaction. This is your\napplication logic and controllers.\nSo you've got your data and view triads written and\ntested. How are you supposed to serve them via HTTP to your\nusers? This is where the interaction layer comes into play.\nApplications are interfaces to your business logic. They can\nbe HTTP (web) applications, command line clients, bare bones\nTCP, ZeroMQ or whatever!\nLet's start with a sinatra application.\n\n\nOkay so I've provided you with a hello world for sinatra and\nminimal business logic. No data or view logic. However I've\nalready talked that malarkey. This example is an application.\nIt is the interface for HTTP that business logic requires.\nIt is however only suitable for limited life shelf\napplications or tiny ones. We need to introduce the other\npart of interaction now, controllers.\n\n\nSo now we've added a controller to contain our (albeit over\nsimplistic) business logic. This abstraction allows us to\nseparate our application interface, in this case sinatra\nrouting, and our business logic execution.\nBrief roundup\nLet's go over two points:\n\napplications serve your business logic to users\ncontrollers provide business logic to applications\n\nAhhh, separations of concerns.\nApplications know about receiving requests and serving\nresponses. They don't care about business logic\nimplementation. It never need know about your mappers, data\nand view models, etc. All an application needs to do is pass\nrequest values into a controller and receive a response back.\nControllers talk to your business logic and almalgamate it for\napplication consumption. A controller might have a #view\nmethod for GET requests in HTTP, or #action for POST\nrequests but these method names should be protocol and thus\napplication agnostic. A view can be served over many protocols\nand so can actions be received.\nThis separation then gives you the flexibility to swap out\napplications. 
Start with Sinatra, move to Rails, realise it's\na piece of junk, move to Padrino and then settle on Sinatra\nwith some Padrino components. Your controllers should not need\nto be changed. Your application layer should normalise\nrequests into hashes so that your controllers can remain\nignorant of them.\nThink of the additional power this provides. Being able to\nreplace your application layer with anything. Say a test\nlayer? You can run your entire application stack from tests\nwithout requiring a framework at all! Never again will you\nhave to put up with slow rails integration tests.\nFinal notes\nWith this abstraction of interaction you will be able to grow\nyour entire application without ever being stuck to a\nparticular vendor. I'm not against using rails. I'm against\nbeing stuck to it. This pattern described here will keep you\nfree of those constraints.\nI use this equation to describe the interaction layer.\n\nI = A + C\n\nI've previously written about the data and view\ntriads. Let me know what you think of the I, D or the V that\nI've written about @LukeMorton."}},{"title":{"html":"Some thoughts on the data triad. That is mappers,\nmodels and actions.
","plain":"Some thoughts on the data triad. That is mappers,\nmodels and actions."},"slug":"data","tags":[],"publishedAt":{"pretty":"25th September 2013","iso":"2013-09-25T00:00:00.000+00:00"},"content":{"html":"Some thoughts on the data triad. That is mappers,\nmodels and actions.
\nThe data triad is all about setting and getting data to and\nfrom data sources. The first part of the triad, mappers, are\nall about communicating with data sources.
\n\n\nSo as you can see mappers do a fair bit of the leg work. They\nencapsulate communication with a data source by providing a\nnumber of methods.
\nThe example above takes a hash of data on construction and\nits methods select or insert hashes from that hash.
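The code example itself is missing from this copy of the article, so here is a hedged reconstruction from the surrounding description: a mapper constructed with a hash of records, whose methods select or insert hashes from it. All names beyond the described method interface are assumptions.

```ruby
# Hypothetical in-memory mapper, rebuilt from the prose above.
# Records live in a hash keyed by id; a SQL or Mongo mapper
# could implement the same method interface and be swapped in.
class UserHashDataMapper
  def initialize(records = {})
    @records = records
  end

  # Selects a single record hash by its id.
  def find_one_by_id(id)
    @records[id]
  end

  # Selects every record matching the given ids, skipping misses.
  def find_by_ids(ids)
    ids.map { |id| @records[id] }.compact
  end

  # Inserts a record hash, keyed by its :id.
  def insert(user)
    @records[user[:id]] = user
  end
end
```

Because models and actions only ever call these three methods, replacing this class with a database-backed mapper is invisible to them.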
\nData models use mapper methods to select data. They then model\nthis data to be consumed. In a web application a model would\nbe called by a controller and used to determine the outcome of\na request. The controller can also pass the modelled data to\nview models for further view related transformation.
\nYou might have a data model to get user data for a profile.
\n\n\nWe load a user hash by calling #find_one_by_id
on an\ninjected mapper. We then have the replacement of\nuser[:friends]
with a selection of other user data using the\ninjected mapper's #find_by_ids
. The completed user hash is\nthen returned as part of another hash.
As you can see data models are used to build up data and model\nit for consumption. Usually for a specific use case.
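The model example is also missing from this copy, so here is a sketch matching the prose: #to_hash loads a user via the injected mapper, swaps the stored friend ids for full user hashes, and wraps the result in another hash. The stub mapper is purely illustrative.

```ruby
# Hypothetical data model reconstructed from the description
# above. The mapper is injected, so any object with the same
# #find_one_by_id / #find_by_ids interface will do.
class UserProfileDataModel
  def initialize(mapper)
    @mapper = mapper
  end

  def to_hash(params)
    user = @mapper.find_one_by_id(params[:id])
    # Replace the stored friend ids with full user hashes.
    user[:friends] = @mapper.find_by_ids(user[:friends])
    { user: user }
  end
end

# A stub mapper so the model can run without a real data source:
class StubUserMapper
  def find_one_by_id(id)
    { id: id, name: "Ann", friends: [2, 3] }
  end

  def find_by_ids(ids)
    ids.map { |id| { id: id, name: "Friend #{id}" } }
  end
end

UserProfileDataModel.new(StubUserMapper.new).to_hash(id: 1)
```

The controller never learns which data source backed the hash it receives; it just forwards the result to the view layer.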
\nSo you might have guessed what data actions do. They act upon\ndata sources via data mappers. They insert, update and delete.
\n\n\nSo I included a couple more classes to make this a complete\nexample. You shouldn't really put data into a source without\nvalidating it first, you know!
\nThe main point of discussion however is\nRegisterUserDataAction
. It uses a validator to ensure the\ndata is good to insert. If it isn't, it'll encapsulate the\nvalidator in an error and raise it. Alternatively you could\nreturn a hash of error data. If the data was fine, it will\nthen get passed to #insert
, another mapper method. You might\nalso send welcome emails at this point which would happen on a\nservice injected into the #exec
method just like the\nmappers.
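The action example is missing from this copy too, so here is a hedged reconstruction of RegisterUserDataAction as described: #exec validates the data, raises an error wrapping the validator on failure, and otherwise hands the data to the mapper's #insert. The validator and error classes are illustrative stand-ins.

```ruby
# Hypothetical validation error carrying the validator, as the
# prose above suggests.
class ValidationError < StandardError
  attr_reader :validator

  def initialize(validator)
    @validator = validator
    super("user data failed validation")
  end
end

# A stand-in validator; real rules would live here.
class UserValidator
  def valid?(user)
    !user[:email].to_s.empty?
  end
end

class RegisterUserDataAction
  # The mapper (and any services, e.g. a welcome mailer) are
  # injected so the action stays decoupled from data sources.
  def exec(mapper, user)
    validator = UserValidator.new
    raise ValidationError.new(validator) unless validator.valid?(user)
    mapper.insert(user)
  end
end
```

As the article notes, the validator could equally be injected rather than constructed inside #exec; it is built inline here only to keep the sketch short.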
So that's that. A quick summary:
\nJust like the separation of concerns with T + M + TE
\nwe get all the goodness of modular design.
Mappers define a source-agnostic interface for communicating\nwith data sources. They should be written agnostically so you\ncan replace them with alternate mappers for other data\nsources. They should always be injected into models and\nactions so you can replace them. Mappers could potentially\ninterface with ActiveRecord, ROM or more\ndirectly with data layers such as Sequel or Mongo.
\nModels aren't your ActiveRecord model. They take in mappers\nand data for querying and return a hash model. This is why the\nmethod is called #to_hash
after all. Models are directly\ncoupled to the mapper methods they call. This is to be\nexpected and, of course, the model only exposes one public\nmethod, so the mapper implementation is hidden from the\napplication controller or wherever your model is used.
Actions also aren't your ActiveRecord model. They take in\nmappers and data to insert, update or delete records in a data\nsource. They too, like models, are directly coupled with the\nmapper methods but again only have one public interface\nthemselves #exec
. Ideally actions will validate the data\nbefore acting on it. They needn't be tied to the validator\nclass like the example above; of course, this could also have\nbeen injected.
I've been calling this the data triad. I also refer to these\ncomponents with the following equation.
\n\n\nD = Ma + Mo + A
\n
Also I've previously written about the view triad. Let me\nknow what you think of the D or the V that I've written about\n@LukeMorton.
","plain":"Some thoughts on the data triad. That is mappers,\nmodels and actions.\nThe data triad is all about setting and getting data to and\nfrom data sources. The first part of the triad, mappers, are\nall about communicating with data sources.\n\n\nSo as you can see mappers do a fair bit of the leg work. They\nencapsulate communication with a data source by providing a\nnumber of methods.\nThe example above takes a hash of data on construction and\nit's method selects or inserts hashes from that hash.\nData models use mapper methods to select data. They then model\nthis data to be consumed. In a web application a model would\nbe called by a controller and used to determine the outcome of\na request. The controller can also pass the modelled data to\nview models for further view related transformation.\nYou might have a data model to get user data for a profile.\n\n\nWe load a user hash by calling #find_one_by_id on an\ninjected mapper. We then have the replacement of\nuser[:friends] with a selection of other user data using the\ninjected mapper's #find_by_ids. The completed user hash is\nthen returned as part of another hash.\nAs you can see data models are used to build up data and model\nit for consumption. Usually for a specific use case.\nSo you might have guessed what data actions do. They act upon\ndata sources via data mappers. They insert, update and delete.\n\n\nSo I included a couple more classes to make this a complete\nexample. You shouldn't really put data into a source without\nvalidating it first you know!\nThe main point of discussion however is\nRegisterUserDataAction. It uses a validator to ensure the\ndata is good to insert. If it isn't it'll encapsulate the\nvalidator in an error and raise it. Alternatively you could\nreturn a hash of error data. If the data was fine this will\nthen get passed to #insert, another mapper method. 
You might\nalso send welcome emails at this point which would happen on a\nservice injected into the #exec method just like the\nmappers.\nBrief roundup\nSo that's that. A quick summary:\n\nmappers act on data sources\nmodels use mappers to get and model data\nactions use mappers to set data, usually with validation\n\nJust like the separation of concerns with T + M + TE\nwe get all the goodness of modular design.\nMappers define a source agnostic interface for communicating\nwith data sources. They should be written agnostically so you\ncan replace them with alternate mappers for other data\nsources. They should always be injected into models and\nactions so you can replace them. Mappers could potentially\ninterface with ActiveRecord, ROM or more\ndirectly with data layers such as Sequel or Mongo.\nModels aren't your ActiveRecord model. They take in mappers\nand data for querying and return a hash model. This is why the\nmethod is called #to_hash after all. Models are directly\ncoupled to the mapper methods they call. This is to be\nexpected and of course the model only exposes one public\nmethod so the mapper implementation is hidden from the\napplication controller or wherever your model is used.\nActions also aren't your ActiveRecord model. They take in\nmappers and data to insert, update or delete records in a data\nsource. They too, like models, are directly coupled with the\nmapper methods but again only have one public interface\nthemselves, #exec. Ideally actions will validate the data\nbefore acting on it. They needn't be tied to the validator\nclass like the example above; of course this could have also\nbeen injected.\nFinal note\nI've been calling this the data triad. I also refer to these\ncomponents with the following equation.\n\nD = Ma + Mo + A\n\nAlso I've previously written about the view triad. Let me\nknow what you think of the D or the V that I've written about\n@LukeMorton.
That is templates,\nmodels and template engines.
","plain":"Some thoughts on the view triad. That is templates,\nmodels and template engines."},"slug":"views","tags":[],"publishedAt":{"pretty":"24th September 2013","iso":"2013-09-24T00:00:00.000+00:00"},"content":{"html":"Some thoughts on the view triad. That is templates,\nmodels and template engines.
\nLet's talk about templates first. Templates are the structure\nfor your data. In a web application your templates will mainly\nbe HTML or JSON focused. You might use mustache templates\nor maybe some old erb. For JSON you might use rabl.\nHere is a mustache example of a template for this blog. It's\nthe post.mustache
template used on this page in fact.
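The template listing itself did not survive in this copy. As a sketch only, a post.mustache along the lines the text describes might look like this — the exact field names are assumptions:

```mustache
<article>
  <h1>{{title}}</h1>
  <time>{{created_at}}</time>
  {{{content}}}
</article>
```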
Pretty simple. The {{placeholder}}
tags are there to be\nreplaced by actual content. Content produced by a view model.\nA triple mustache tag is also used, {{{content}}}
. This is\na tag that does not escape it's HTML output into HTML\nentities. Mustache is safe by default you see. Anyways...
Where does the data used to replace these placeholders come\nfrom? Well this leads us onto a PHP class, the PostViewModel.\nView models model data for the view funnily enough.
\nAs a quick aside, I mainly blog in ruby. But I made this\nwebsite with PHP. For this article I'm using real code.\nI chose PHP – let's see what you think.
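The PHP listing is missing from this copy, so purely as an illustration here is the same idea sketched in Ruby rather than the article's PHP. PostViewModel and #as_array come from the surrounding text; everything else (the field names, the injected markdown renderer) is an assumption:

```ruby
# Hypothetical Ruby stand-in for the article's PHP PostViewModel.
# The markdown renderer is injected, and stubbed with a trivial
# lambda here so the sketch runs on its own.
class PostViewModel
  def initialize(post, markdown = ->(text) { "<p>#{text}</p>" })
    @post = post
    @markdown = markdown
  end

  # Named #as_array to match the PHP original mentioned below;
  # in Ruby it naturally returns a Hash.
  def as_array
    {
      title: @post[:title],
      content: @markdown.call(@post[:content]),
      created_at: Time.at(@post[:created_at]).utc.strftime('%d %B %Y')
    }
  end
end
```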
\n\n\nSo this view model transforms markdown blog post content into\nHTML and formats the created timestamp into a string. It then\nreturns these as well as other data in the form of a hash.
\nThis data produced by #as_array
then needs to be consumed by\na template engine and mixed with the post.mustache
template.
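The merging listing is also gone. The original used Phly's mustache implementation in PHP; as a stand-in, here is a deliberately tiny Ruby renderer that only understands simple {{escaped}} and {{{unescaped}}} tags, just to show the engine's role of mixing template and view model data:

```ruby
require 'cgi'

# Toy merge step: not a real mustache engine, only enough to
# demonstrate template + view model + engine. Triple tags are
# substituted raw; double tags are HTML-escaped, as in mustache.
def render(template, data)
  html = template.gsub(/\{\{\{(\w+)\}\}\}/) { data[$1.to_sym].to_s }
  html.gsub(/\{\{(\w+)\}\}/) { CGI.escape_html(data[$1.to_sym].to_s) }
end
```

For example, `render('<h1>{{title}}</h1>{{{content}}}', title: 'A & B', content: '<p>Hi</p>')` escapes the title but leaves the rendered content untouched.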
So here we have the merging of template and view model by a\ntemplate engine. In this case it is Phly's implementation\nof mustache in PHP. We also inject in\nMichelf\\MarkdownExtra to render the post content.
\nBrief roundup\nWhat do you think? Fairly simple I think. Three things:\n\ntemplate to structure your output\nview model to adapt data for your template\ntemplate engine to mix the template and view model\n\nThis simple separation of concerns within the view logic of\nyour code will lead to neater applications.
\nTemplates structure your data. They know very little about\nyour view logic. Keeping them this way you'll be able to reuse\nthem more often and not have to worry when changes happen\nbehind the scenes.
\nView models handle the data behind the scenes and provide a\nsimple interface for template engines to consume the data they\nhave rendered.
\nTemplate engines know one thing only. How to mix templates and\nview models. They are obviously coupled to the type of\ntemplates you use. You can't exactly parse mustache with a\nmarkdown template engine.
\nAre you building the views to your applications like me? Let\nme know if you are or if not what you are doing instead\n@LukeMorton.
\nFinal note\nI've been calling this the view triad. I also abbreviate to:
\n\n\nV = T + M + TE
\n
Not so abbreviated huh? There are a couple more equations in\nmy head that I'll write about soon. Ciao ciao.
","plain":"Some thoughts on the view triad. That is templates,\nmodels and template engines.\nLet's talk about templates first. Templates are the structure\nfor your data. In a web application your templates will mainly\nbe HTML or JSON focused. You might use mustache templates\nor maybe some old erb. For JSON you might use rabl.\nHere is a mustache example of a template for this blog. It's\nthe post.mustache template used on this page in fact.\n\n\nPretty simple. The {{placeholder}} tags are there to be\nreplaced by actual content. Content produced by a view model.\nA triple mustache tag is also used, {{{content}}}. This is\na tag that does not escape it's HTML output into HTML\nentities. Mustache is safe by default you see. Anyways...\nWhere does the data used to replace these placeholders come\nfrom? Well this leads us onto a PHP class, the PostViewModel.\nView models model data for the view funnily enough.\nAs a quick aside, I mainly blog in ruby. But I made this\nwebsite with PHP. For this article I'm using real code.\nI chose PHP – let's see what you think.\n\n\nSo this view model transforms markdown blog post content into\nHTML and formats the created timestamp into a sring. It then\nreturns these as well as other data in the form of a hash.\nThis data produced by #as_array then needs to be consumed by\na template engine and mixed with the post.mustache template.\n\n\nSo here we have the merging of template and view model by a\ntemplate engine. In this case it is Phly's implementation\nof mustache in PHP. We also inject in\nMichelf\\MarkdownExtra to render the post content.\nBrief roundup\nWhat do you think? Fairly simple I think. Three things:\n\ntemplate to structure your output\nview model to adapt data for your template\ntemplate engine to mix the template and view model\n\nThis simple separation of concerns within the view logic of\nyour code will lead to neater applications.\nTemplates structure your data. They know very little about\nyour view logic. 
Keeping them this way you'll be able to reuse\nthem more often and not have to worry when changes happen\nbehind the scenes.\nView models handle the data behind the scenes and provide a\nsimple interface for template engines to consume the data they\nhave rendered.\nTemplate engines know one thing only. How to mix templates and\nview models. They are obviously coupled to the type of\ntemplates you use. You can't exactly parse mustache with a\nmarkdown template engine.\nAre you building the views to your applications like me? Let\nme know if you are or if not what you are doing instead\n@LukeMorton.\nFinal note\nI've been calling this the view triad. I also abbreviate to:\n\nV = T + M + TE\n\nNot so abbreviated huh? There are a couple more equations in\nmy head that I'll write about soon. Ciao ciao."}},{"title":{"html":"This is my take on using hashes to transfer data between\nbehaviour. You might know hashes as maps or associative\narrays.
","plain":"This is my take on using hashes to transfer data between\nbehaviour. You might know hashes as maps or associative\narrays."},"slug":"hashes-for-data","tags":[],"publishedAt":{"pretty":"23rd September 2013","iso":"2013-09-23T00:00:00.000+00:00"},"content":{"html":"This is my take on using hashes to transfer data between\nbehaviour. You might know hashes as maps or associative\narrays.
\nIf you're up for separating data and behaviour as discussed in\na previous post then you'll need some kind of container\nfor your data whilst passing it around your application. If\nyou're walking away from the object zoo then you might be\nmissing the monkeys. Hopefully I can assure you the world of\nseparating data from behaviour is a more tidy and efficient\none.
\nLet's work with an example. We'll start with a data source. In\nthis case, a mapper method called #find_one_by_id
. When\ngiven an ID it returns a user.
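The mapper listing did not survive extraction; a minimal sketch of the idea, with an in-memory array of hashes standing in for a real data source (the class name UserMapper is assumed, #find_one_by_id comes from the text):

```ruby
# Sketch of a mapper: encapsulates talking to a data source and
# hands back plain hashes. Here the "source" is just an array.
class UserMapper
  def initialize(rows)
    @rows = rows
  end

  def find_one_by_id(id)
    @rows.find { |row| row[:id] == id }
  end
end
```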
Surprise surprise I'm returning a Hash
here. Why is this\ncool? Because it's already independent of the data layer it\njust came from. It's not a Sequel dataset, or MongoDB cursor.\nIt's a hash representing a record. Sequel and MongoDB even\nhave nice ways of exporting records as hashes. And when you're\ntesting, you can just make up hashes as you go along like my\nexample above!
So let's talk about another layer. Data modelling. That's the\ntransformative step that uses mappers and data to build up a\nmodel of data required by the application. The following\nexample uses the mapper defined above and also a user ID to\nproduce a data model.
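The listing is missing here; a sketch of what a UserDataModel might look like under the description above. The modelled fields and the FakeMapper stand-in are assumptions for illustration:

```ruby
# Sketch of a data model: #to_hash takes a hash (mapper plus user ID)
# and returns the modelled user data as another hash.
class UserDataModel
  def to_hash(opts)
    user = opts[:mapper].find_one_by_id(opts[:id])
    { id: user[:id], name: user[:name] } # modelled for consumption
  end
end

# Throwaway mapper standing in for a real one, just for this sketch
FakeMapper = Struct.new(:rows) do
  def find_one_by_id(id)
    rows.find { |r| r[:id] == id }
  end
end
```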
\n\n\nWho'da thunk it? #to_hash
returns a hash! You should also\nnote it takes a hash to begin with. Versatile little things\naren't they?
Let's add another tier. A view model.
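Again the listing is gone; a sketch of a ProfileViewModel as described below — hash in, hash out. The :heading transformation is invented purely for illustration:

```ruby
# Sketch of the view model tier: takes a hash with one key,
# :profile_user, and returns a hash shaped for the view.
class ProfileViewModel
  def to_hash(opts)
    user = opts[:profile_user]
    {
      heading: "#{user[:name]}'s profile", # illustrative view logic
      profile_user: user
    }
  end
end
```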
\n\n\nSo we pass in another hash to ProfileViewModel#to_hash
. This\ntime with one key, :profile_user
which in fact is the hash\nproduced by UserDataModel#to_hash
.
At this point you might have your object pirate hat on. You're\nshouting at me, "Y U NO OBJECT?!"
\nWell an object has implementation details. They have methods.\nThey can also have side effects.
\nSo then you say, "SEW DOES HASH IN REWBEE!"
\nWell yes it does. But a Hash
is a core object provided by\nthe ruby language. It doesn't belong to a library like\nActiveRecord or DataMapper. It is a common data type that has\na bunch of useful methods that everyone knows about. All you\nneed to care about are the key names.
So now you're getting smarter. You're saying, "Why can't I\nextend Hash
and add custom behaviour?"
"In ruby we can make objects quack like a Hash
."
Well yah. But you still might have side effects. Passing a\nsequel query from your data mapper into your view model to be\nexecuted at some point in the future might error at that point\nin the future. At least if you transform your data provided by\na mapper into a hash in the data model layer you can contain\ndata errors where they are relevant.
\nYou can mock out with ease too. Anywhere. If all your objects\ndo is speak to each other in hashes then they all\nautomatically speak the same language.
\nIt's true we pass mapper objects into the data model. So there\nare exceptions. However data mappers are directly related to\ndata models. Mappers are used by models to get data to be\nmodelled. In fact because mappers are called inside modellers,\ntheir many methods are never exposed to anyone but the\nmodeller. This means your application logic only knows to\npass in mapper objects to a single entry point on a data model\nobject. It's brilliant I tell you!
\nSpeak in hashes my friends. Speak with a Hash
.
This is my take on data and behaviour. The two\nintertwining components that make our programs.
","plain":"This is my take on data and behaviour. The two\nintertwinning components that make our programs."},"slug":"data-and-behaviour","tags":[],"publishedAt":{"pretty":"22nd September 2013","iso":"2013-09-22T00:00:00.000+00:00"},"content":{"html":"This is my take on data and behaviour. The two\nintertwinning components that make our programs.
\nIn OOP madland behaviour and data are combined into a single\nentity. Yes that's right, an object. To an OOP magician this\nis a self contained unit that can be passed around a program\nbut ideally doesn't leak implementation details.
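The example class referred to below was stripped from this copy; a reconstruction of the kind of User object being described. The constructor arguments and the age calculation are assumptions:

```ruby
# Data and behaviour bundled into one object: callers only see
# #full_name and #age, not how they are worked out.
class User
  def initialize(first_name, last_name, born_on)
    @first_name = first_name
    @last_name = last_name
    @born_on = born_on # a Time
  end

  def full_name
    "#{@first_name} #{@last_name}"
  end

  def age
    seconds_per_year = 365.25 * 24 * 3600
    ((Time.now - @born_on) / seconds_per_year).floor
  end
end
```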
\n\n\nThe above example for instance doesn't leak how it formulates\nthe full name of a User
or how it works out their age. It\nsimply provides two methods #full_name
and #age
that can\nbe used by related logic to find these values.
Personally I see that example as leaky still. Before I go into\nthe whys I first want to explain the difference between data\nand behaviour.
\nData is data. It is information that never changes. 23
is an\ninteger. Its value is always 23
. It never changes. It is\ndata.
20+3
will also always equal 23
as long as +
always\nbehaves the same but it isn't data. It's behaviour. It does\nsomething. It calculates a result which in most normal places\nwill produce the integer 23
that as explained above is\ndata. Let's work on some definitions.
\n\nData is a value that doesn't change
\n
This works. 'Luke'
is always 'Luke'
. It doesn't suddenly\nchange to 'Bob'
.
\n\nBehaviour takes data and produces more data
\n
Based on the example of +
when given two integers produces a\nthird integer this definition fits.
Objects are a mess\nSo bringing this back to OOP madland some wizards choose to\nmix data and behaviour into one magical object. The User
\nclass defined above is given data on construction and produces\nan object which then can be used to work out some data that\ndidn't exist on construction.
The values 'Luke Morton'
and 23
were never passed to the\n#new
method of User
. They were produced by behaviour\ncontained within the object.
How is this a mess? Put simply it's because data is tied to\nbehaviour. And behaviour happens at a later stage to the data\nbeing passed into it. The setbacks include:\n\nDependencies on multiple method names per object. Bigger\nexposure means more potential breaking points when methods\nare renamed or behaviour altered but not updated in\nimplementation. Not to mention the added code bloat of\nmultiple call sites which also leads to debugging\ncomplexity.\nDelayed behaviour means side effects can happen at any time\nwhen called throughout your application. Database calls\nmight error in the view part of your application when\ntriggered by a method call to a model.\nData is tied within the implementation of the model rather\nthan being a more common data type like a hash.\n\nLet's solve each of these bug bears.
\nSeparating data and behaviour\nWe can use the same solution found in a previous post on\nthe single responsibility principle.
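The solution listing is missing from this copy; a sketch of the single-entry-point shape the post advocates. The class name UserViewModel and the field names are assumptions:

```ruby
# One public method, hash in, hash out. All derived values are
# produced in a single call, so side effects can only happen at
# one call site.
class UserViewModel
  def to_hash(user)
    {
      full_name: "#{user[:first_name]} #{user[:last_name]}",
      age: ((Time.now - user[:born_on]) / (365.25 * 24 * 3600)).floor
    }
  end
end
```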
\n\n\nThis object only has one public method, #to_hash
. Only one\nimplementation to leak. It takes a hash of data and produces\nanother. And it produces it all at the first call time.
This also solves the issue of potential side effects. One\ncall site means you can trap potential issues all in one place\nlike a controller with a try..catch (begin..rescue).
\nGuess what? The third problem is solved too. Since data is\npassed around as a hash, no object with potential constraints\non data access (think no direct access to initial values) can\nhold us back. We can do much more with a hash as it's a common\ndata type. External libraries can consume hashes far easier\nthan bespoke objects common to fewer libraries.
\nI'll talk more in the future about the versatilities of using\nhashes to store data.
\nEdit: I've now talked more about using hashes for data.
","plain":"This is my take on data and behaviour. The two\nintertwinning components that make our programs.\nIn OOP madland behaviour and data are combined into a single\nentity. Yes that's right, an object. To an OOP magician this\nis a self contained unit that can be passed around a program\nbut ideally doesn't leak implementation details.\n\n\nThe above example for instance doesn't leak how it formulates\nthe full name of a User or how it works out their age. It\nsimply provides two methods #full_name and #age that can\nbe used by related logic to find these values.\nPersonally I see that example as leaky still. Before I go into\nthe whys I first want to explain the difference between data\nand behaviour.\nData is data. It is information that never changes. 23 is an\ninteger. It's value is always 23. It never changes. It is\ndata.\n20+3 will also always equal 23 as long as + always\nbehaves the same but it isn't data. It's behaviour. It does\nsomething. It calculates a result which in most normal places\nwill produce the integer 23 that as explained above is\ndata. Let's work on some definitions.\n\nData is a value that doesn't change\n\nThis works. 'Luke' is always 'Luke'. It doesn't suddenly\nchange to 'Bob'.\n\nBehaviour takes data and produces more data\n\nBased on the example of + when given two integers produces a\nthird integer this definition fits.\nObjects are a mess\nSo bringing this back to OOP madland some wizards choose to\nmix data and behaviour into one magical object. The User\nclass defined above is given data on construction and produces\nan object which then can be used to work out some data that\ndidn't exist on construction.\n\n\nThe values 'Luke Morton' and 23 were never passed to the\n#new method of User. They were produced by behaviour\ncontained within the object.\nHow is this a mess? Put simply it's because data is tied to\nbehaviour. And behaviour happens at a later stage to the data\nbeing passed into it. 
The setbacks include:\n\nDependencies on multiple method names per object. Bigger\nexposure means more potential breaking points when methods\nare renamed or behaviour altered but not updated in\nimplementation. Not to mention the added code bloat of\nmultiple call sites which also leads to debugging\ncomplexity.\nDelayed behaviour means side effects can happen at any time\nwhen called throughout your application. Database calls\nmight error in the view part of your application when\ntriggered by a method call to a model.\nData is tied within the implementation of the model rather\nthan being a more common data type like a hash.\n\nLet's solve each of these bug bears.\nSeparating data and behaviour\nWe can use the same solution found in a previous post on\nthe single responsibility principle.\n\n\nThis object only has one public method, #to_hash. Only one\nimplementation to leak. It takes a hash of data and produces\nanother. And it produces it all at the first call time.\nThis also solves the issue of potential side effects. One\ncall site means you can trap potential issues all in one place\nlike a controller with a try..catch (begin..rescue).\nGuess what? The third problem is solved too. Since data is\npassed around as a hash, no object with potential constraints\non data access (think no direct access to initial values) can\nhold us back. We can do much more with a hash as it's a common\ndata type. External libraries can consume hashes far easier\nthan bespoke objects common to fewer libraries.\nI'll talk more in the future about the versatilities of using\nhashes to store data.\nEdit: I've now talked more about using hashes for data."}},{"title":{"html":"This is my take on the single responsibility principle and\nhow we can take it further.
","plain":"This is my take on the single responsibility principle and\nhow we can take it further."},"slug":"taking-srp-further","tags":[],"publishedAt":{"pretty":"21st September 2013","iso":"2013-09-21T00:00:00.000+00:00"},"content":{"html":"This is my take on the single responsibility principle and\nhow we can take it further.
\nThe principle tells us that:
\n\n\nEvery class should have a single responsibility, and that\nresponsibility should be entirely encapsulated by the class.
\n
Let's dig straight into an example in ruby.
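The Ruby example was lost in this copy; a reconstruction of the sort of SRP-following class the next paragraph describes. The names and the trivial markdown stand-in are assumptions:

```ruby
# One responsibility (preparing a blog post for its template),
# but several public readers that each leak a method name.
class BlogPostViewModel
  def initialize(post)
    @post = post
  end

  def title
    @post[:title]
  end

  def content
    "<p>#{@post[:content]}</p>" # stand-in for markdown rendering
  end

  def created_at
    Time.at(@post[:created_at]).utc.strftime('%d %B %Y')
  end
end
```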
\n\n\nSo here we have a class that follows SRP. It deals with the\ntransformation of data ready to be merged with a template. In\nthis case it is a blog post's view.
\nHowever I don't want to just tell you about SRP. I want to\nshow you my take. I think the above example doesn't fully\nencapsulate like SRP mandates. My personal take on single\nresponsibility comes in the form of a single entry point to\nthe behaviour of an object. That means one public function\nonly! I would personally write the above code like so:
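The rewritten listing is also missing; a sketch of the single-public-method version described here — hash in, hash out, no stored state. Field names and the markdown stand-in remain assumptions:

```ruby
# The single-entry-point shape: the only thing callers can know
# about this object is #to_hash.
class BlogPostViewModel
  def to_hash(post)
    {
      title: post[:title],
      content: "<p>#{post[:content]}</p>", # stand-in for markdown rendering
      created_at: Time.at(post[:created_at]).utc.strftime('%d %B %Y')
    }
  end
end
```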
\n\n\nYou might observe that I'm not storing any state in this\nobject. You'd be correct. This is more a functional style of\nprogramming. It unties data from behaviour. And this is the\ncrux of single responsibility for me. In order to decouple\nsystems you shouldn't be passing objects around with their\nbehaviour. Passing behaviour around in a variable in the form\nof an object will lead to more code knowing more shit.
\nWhy don't you want code knowing more shit? Because the less it\nknows the less things go wrong when the facts it knows are no\nlonger true.
\nBringing this back to SRP, behaviour should have a single\nresponsibility. To change data. Input and output. We pass a\nhash into BlogPostViewModel.new.to_hash()
and get another\nhash out. This way you can't pass around a BlogPostViewModel
\nobject and have your code know anything other than #to_hash
.\nThe object's single responsibility is #to_hash
.
Of course the two examples found within this post are not\nthe same. One is an object that has many public accessors to\nthe data it contains. Some with transformative behaviour on\nthe data that was initially passed into the object when it was\ninstantiated. The other takes a hash and produces a hash.\nHowever both can be used with mustache templates so for me\nit's not an issue.
\nI like passing around my data in hashes and behaviour in\nobjects or functions. It depends on the language and first\nclass nature of functions. In Ruby you have to pass around\nobjects. In Clojure or JavaScript you can pass methods around\nbut anyhow I digress.
\nI love this principle and if you don't you will learn to love\nit in time. Following it will keep your code self contained.\nIt will lead to fewer dependencies.
\nI will write more on this subject but until then let me make\na quote up.
\n\n","plain":"This is my take on the single responsibility principle and\nhow we can take it further.\nThe principle tells us that:\n\nEvery class should have a single responsibility, and that\nresponsibility should be entiry encapsulated by the class.\n\nLet's dig straight into an example in ruby.\n\n\nSo here we have a class that follows SRP. It deals with the\ntransformation of data ready to be merged with a template. In\nthis case it is a blog post's view.\nHowever I don't want to just tell you about SRP. I want to\nshow you my take. I think the above example doesn't fully\nencapsulate like SRP mandates. My personal take on single\nresponsibility comes in the form of a single entry point to\nthe behaviour of an object. That means one public function\nonly! I would personally write the above code like so:\n\n\nYou might observe that I'm not storing any state in this\nobject. You'd be correct. This is more a functional style of\nprogramming. It unties data from behaviour. And this is the\ncrux of single responsibility for me. In order to decouple\nsystems you shouldn't be passing objects around with their\nbehaviour. Passing behaviour around in a variable in the form\nof an object will lead to more code knowing more shit.\nWhy don't you want code knowing more shit? Because the less it\nknows the less things go wrong when the facts it knows are no\nlonger true.\nBringing this back to SRP, behaviour should have a single\nresponsibility. To change data. Input and output. We pass a\nhash into BlogPostViewModel.new.to_hash() and get another\nhash out. This way you can't pass around a BlogPostViewModel\nobject and have your code know anything other than #to_hash.\nThe objects single responsibility is #to_hash.\nOf course the two examples found within this post are not\nthe same. One is an object that has many public accessors to\nthe data it contains. Some with transformative behaviour on\nthe data that was initially passed into the object when it was\ninstatiated. 
The other takes a hash and produces a hash.\nHowever both can be used with mustache templates so for me\nit's not an issue.\nI like passing around my data in hashes and behaviour in\nobjects or functions. It depends on the language and first\nclass nature of functions. In Ruby you have to pass around\nobjects. In Clojure or JavaScript you can pass methods around\nbut anyhow I digress.\nI love this principle and if you don't you will learn to love\nit in time. Following it will keep your code self contained.\nIt will lead to fewer dependencies.\nI will write more on this subject but until then let me make\na quote up.\n\nThe less bytes know about other bytes the better\n"}}]},"__N_SSG":true}
\n