Roadmap review and retrospective – 2015: Hello WordPress


Introduction to WordPress


At the end of 2015 I decided to create this blog with the purpose of documenting and sharing the knowledge acquired during my professional and personal journey. Writing about what I learn is a good habit and helps me structure my knowledge. Plus, it is good to share it with more people and, even better, to get some feedback and extend that knowledge.

The name “The privilege of making the wrong choice” was inspired by Zen, a band from Porto in the 90s. It represents the privilege of having time and space to experiment, to make mistakes, and to learn from failure and success in the workplace. Fortunately, at Celfinet, my company at the time, I had that privilege.

My first post was about my experience as a member of the organization team of SQLSaturday Porto 2015. After that I recovered an old post about an Arduino project and wrote about my first experience as a speaker in the Porto.Data community.

I completed my WordPress initiation: creating posts and pages, adding content (including media), and managing plugins, appearance and all the other aspects of a WordPress website.

DevOps Porto as TugaIT 2017 participating community

TugaIT is one of the biggest IT conferences in Portugal. The TugaIT 2017 edition will take place from 18 to 20 May in Lisbon (Microsoft HQ).

This three-day conference includes workshops and talks across the following tracks: Azure Infrastructure, Agile & DevOps, Integration, Microsoft Data Platform, Office 365, Open Source Data Platform, PowerShell, Professional Development, Programming, SharePoint.

As a member of DevOps Porto, I'm proud that we are one of the participating communities of TugaIT 2017, and we hope to contribute to a high-quality conference.

If you would like to be a speaker, I invite you to submit your talk (or talks): Call For Speakers.

Hope to see you all at the conference, as a speaker or as an attendee.

DevOps Porto

At the beginning of 2016 I started to build an idea with Miguel Alho about creating a DevOps group where developers, operations, coaches, managers and everyone else involved in the software development pipeline could share opinions, discuss different points of view and, most importantly, learn from each other.

While we worked at the same company (Miguel as a developer, me in operations), we saw how important communication, negotiation and mutual learning between development and operations were to relieving delivery pain. We especially enjoyed the discussion process and the collaborative learning. So we thought: why not extend this to more people?

During a conference in Porto in June I met Manuel Pais, founder of DevOps Lisbon, and shared with him our intention of creating a DevOps group in Porto. He liked the idea and encouraged us to make it real. Filipe Correia, who was part of the conference organization team, also liked the idea and became interested in being part of this DevOps adventure.

In July 2016 we launched the DevOps Porto group and added Filipe Correia to the team.

DevOps Porto – Who are we?

We are IT professionals from the North of Portugal (especially the city of Porto) interested in the values, culture and practices related to DevOps.

DevOps Porto – Our mission

To build bridges between development and operations, communities, companies, PEOPLE.

DevOps Porto – Our Goals

  • Create a community around DevOps movement
  • Promote discussion around DevOps practices
  • Promote the sharing of DevOps related knowledge

DevOps Porto – Where can you find us?

We had our first meetup in October 2016 together with Agile Connect (our first bridge with a community) and our second meetup in January 2017 with Mindera (our first bridge with a company). Our plan for 2017 is to organize a meetup every two months, with the next one in March.

Meanwhile, the team has grown, with Miguel David, Elisete Cruz and Cesar Rodrigues joining us. It's a great team. Everyone is welcome to our meetups and to our team. Just search for us on the Meetup website or talk with us on Slack. Join the team and help us build bridges around DevOps.

Flyway command-line easy setup

One of the reasons for adopting the Flyway command-line was the easy setup process (no need to install). First, I will show how to set up Flyway for a single database, i.e., as if you have only one database on your server, and then the setup for multiple databases on a server.

Single database

You can download Flyway command-line here.

After downloading and extracting you have the following folder/file structure:

[Image: Flyway command-line folder structure]

In this case it's only necessary to edit the “flyway.conf” file (you can find it in the conf folder):

  1. Set the url to the target server/database
  2. Set the user and password for the target server/database

Alternatively, you don't need to set the user and password in the configuration file; they can be provided as arguments (see the example below).
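For illustration, the relevant part of flyway.conf could look like this (the server, database and credentials below are placeholders, not values from a real setup):

    # conf/flyway.conf - minimal single-database sketch (placeholder values)
    flyway.url=jdbc:jtds:sqlserver://MyServer:1433/MyDatabase1
    flyway.user=myUser
    flyway.password=mySecretPassword

And the alternative, keeping the credentials out of the file and passing them as arguments instead (values passed as “-key=value” override the configuration file):

    flyway -user=myUser -password=mySecretPassword migrate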

Multiple databases

Because numbers matter! If you have two or more databases on your server, one option is to apply the single-database setup for every database. However, this approach leads to an unnecessary multiplication of files. As an alternative, we can create a centralized folder dedicated to the Flyway application.

  1. Extract Flyway to a folder and rename that folder to flyway;
  2. Create a folder for each one of your databases (MyDatabase1, MyDatabase2, …);
  3. Create the flyway_conf folder and copy the flyway.conf file into it;
    1. In the flyway.conf file, set the url, user and password for the target database;
    2. Uncomment the locations configuration and set its value to “filesystem:.” (flyway.locations=filesystem:.). This means that Flyway will recursively scan for migrations in the folder that contains flyway.cmd (MyDatabase1, for example);

  4. Create the flyway.cmd file inside each database folder with the following code inside (a minimal sketch is shown below);
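A minimal sketch of that flyway.cmd, assuming the flyway installation folder sits next to the database folders, that the shared configuration lives in flyway/flyway_conf, and that the command-line -configFile argument is used to point to it (the relative paths are assumptions, adjust them to your own layout):

    @echo off
    REM MyDatabase1\flyway.cmd - wrapper around the central Flyway installation.
    REM It points Flyway to the shared configuration file and forwards any
    REM extra arguments (migrate, info, validate, ...).
    ..\flyway\flyway.cmd -configFile=..\flyway\flyway_conf\flyway.conf %*

Because flyway.locations is set to “filesystem:.”, Flyway scans the folder you run the command from, so each database folder only needs its migration scripts and this small wrapper.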

  5. After this you can execute the flyway command from each database folder.

And that's it! You are ready to manage the changes of multiple databases.

Multiple servers

If you have multiple servers/environments (dev, test, staging, …), the solution is to create a flyway.cmd and a flyway.conf for each server/environment.

  1. Inside the flyway/flyway_conf folder, create a flyway_servername.conf file for each server (flyway_dev.conf, flyway_test.conf) and set the url, user and password for the target server and database;
  2. Create a flyway_servername.cmd file for each server (flyway_dev.cmd, flyway_test.cmd). Make sure that each one uses the correct configuration file;
    1. Each cmd file should contain the following code (a minimal sketch is shown below):
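For example, flyway_dev.cmd could follow the same pattern as the per-database wrapper, just pointing to the dev configuration file (a sketch with assumed relative paths):

    @echo off
    REM flyway_dev.cmd - wrapper for the dev server/environment.
    REM Create one file like this per environment (flyway_test.cmd, ...),
    REM each one pointing to its own configuration file.
    ..\flyway\flyway.cmd -configFile=..\flyway\flyway_conf\flyway_dev.conf %*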

You just have to execute the flyway command for the intended server/environment.

User and Password

Because we want to source control all migrations and Flyway related files, saving the user name and the password in plain text in the configuration file is not such a good idea. So, here's a solution:

  1. Remove your user name and password from your configuration file;
  2. Create the folder "C:\Program Files (x86)\flyway" and inside this folder create the file flyway_ep_dev.cmd;
  3. The file flyway_ep_dev.cmd should contain the following code (a minimal sketch is shown below);
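A possible sketch of flyway_ep_dev.cmd (the credentials and the wrapper it calls are placeholders; the important point is that -user and -password given as arguments override, or complete, the configuration file):

    @echo off
    REM flyway_ep_dev.cmd - lives in C:\Program Files (x86)\flyway (on the PATH)
    REM and is NOT kept under source control, since it holds the credentials.
    REM Run it from the target database folder: it calls the environment wrapper
    REM found there and injects the credentials as command-line arguments.
    flyway_dev.cmd -user=myUser -password=mySecretPassword %*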

  4. Add the folder "C:\Program Files (x86)\flyway" to your PATH.

This way you have removed your credentials from your source control. You should run this new cmd file from your target database folder.

#c9d9 Continuous Discussions – Open Source and DevOps

During my vacation in August I participated in a Continuous Discussions (#c9d9) episode about Open Source and DevOps.

Continuous Discussions is a community initiative powered by Electric Cloud, and consists of a series of community panels about Agile, Continuous Delivery and DevOps.

This format, a discussion panel that debates different perspectives on a specific topic, surprised me with how interactive and fun it was to be part of this event. So, well done #c9d9, I really enjoyed it.

Here are some insights from my contribution to the panel:

Open source – free as in beer or free as in puppy?

You always have a cost, whether it's open source or closed source. The cost depends on the size of your team, the complexity of your tasks and the frequency of change. The good thing about open source is that you can contribute to the change and take it in your direction. That's what I like the most about open source.

Where do you use open source?

Open source tools are used at some points of the development pipeline, for example for delivering changes to databases. If you are a startup there is a high probability that you use a lot of open source tools, and as your team/organization evolves and grows in complexity you will probably start migrating to commercial tools. The rule is "try before you buy". You can combine open source and commercial tools like Jenkins, TeamCity, TFS Build, Octopus Deploy, etc. Even Microsoft is becoming more open.

Where would you not use an open source tool?

In big and complex systems. An important question is: what's the common factor when you migrate to an open source tool, or to a closed source tool, in either direction? In my opinion it's size combined with complexity. So, use the right tool for the right job, whether open source or commercial.

Quality concerns?

First, if the open source tool's role is to support your development pipeline, you have more flexibility to manage the exposure to errors. But if the open source tool is part of your product, you have to take responsibility for that integration and you have to assure that the quality is there. Quality must be present every time and everywhere; however, at the development pipeline level I have more flexibility, while at the customer level I have to be more careful.

Security concerns?

Security concerns about open source tools are part of the general security concerns. I try to keep the development environment as closed as possible.

Legal concerns?

Both open source and closed source have legal concerns. Sometimes the decision is made at a higher level and you cannot do anything about it; other times you can influence or even make the decision, and in that case it's better to read the license (you're probably thinking "who does that?").

You can see the full episode here!

PortoData 28 July 2016 – Delivering changes for applications and databases

Last July, at the Porto.Data event, I gave my first co-presentation with my friend Miguel Alho (@MytyMyky); the topic we explored was the relationship between databases and applications in the development process.


After doing some presentations about the database development process and DevOps, this presentation was the “missing link” that allowed the audience to see the “big picture”.

The title chosen for the presentation was “Delivering changes for applications and databases” and its content is the result of the experience Miguel and I shared at Celfinet. The challenges of the interaction/dependency between applications and databases were the main topic. We also explored the tools and processes that helped us overcome those challenges.

A good communication protocol between development and database/operations, together with automation (a lot of automation) in the process definition, were the key factors in achieving our goal: a development pipeline that included source control, continuous integration and continuous delivery.

I presented my perspective on database development while Miguel presented the application development perspective; we represented the common division between applications and operations. The audience's reaction and questions about the way Miguel and I established a communication protocol and a development process that included databases and applications were very interesting. At the end of the presentation we were happy with the audience feedback, an experience to repeat, I say!

Here are the slides from our presentation:


Scrum Portugal and the DevOps challenge

“How about the operations?” This was a recurring question in the conversations I had with Nuno Rafael (@nrgomes). After several discussions about agile methodologies and their effects and challenges on the operations world, I accepted his challenge to deliver a session about DevOps at the Scrum Portugal community.

I decided to give my session, delivered on June 29, the title “DbOps, DevOps and Ops”, because my first contact with operations was at the database level (as a DBA); then I progressed to infrastructure (as an infrastructure team member), where I had to deal not only with infrastructure operations but also with operations related to applications.

So, my session tells the story of my operations journey through the databases, applications and infrastructure domains, where the goal is to deliver software as fast as possible while balancing business goals and business deliverables. In other words, it's my story about engineering practices within agile, using Scrum, Kanban, source control, database automation, continuous integration and continuous delivery.

The audience's reaction to the presentation was quite good. I got questions from operations people, development people and “agile” people, which made me satisfied. The session ended with an open discussion forum where the attendees and I had the chance to explore the covered topics in more detail.

I would like to thank the whole Scrum Portugal team, first for the invitation and second for the way you welcomed me. It was a very well organized event.

Here are my session slides:


TugaIT 2016 – Road to database automation

The TugaIT 2016 conference, on May 21, was so far the biggest event I participated in as a speaker. The logistics were impressive: 9 tracks, each track with 6 sessions, making a total of 54 sessions. At the end of the event two combined words remained in my mind: monstrously amazing.

My participation in the TugaIT event started the day before, Friday May 20, with the workshop “Deep walkthrough of some of the most popular/innovative features in SQL Server storage engine” by Sunil Agarwal (@S_u_n_e_e_l). In addition to getting to know very interesting features of the SQL Server 2016 edition, Sunil Agarwal is able to explain how they work in a very simple and easy-to-understand way. In fact, when you listen to him talking, everything in SQL Server seems easy and simple. The day ended with the speakers' dinner, where I had the opportunity to meet and socialize with other speakers.

The next day, Saturday May 21, I delivered my session “Road to database automation”. This session addressed the challenges of the first step of the database automation process: database source control. Despite being one of the last sessions of the day, I had a good and very interactive audience. I was glad to learn that more people are doing database source control.


PortoData 20 April 2016 – Database source control: Migrations vs State

My second presentation at Porto.Data (April 20) was about the two approaches to database source control: migrations and state. During the presentation I explored the advantages and disadvantages of each approach. For the migrations approach I used the tool Flyway and for the state approach I used Redgate SQL Source Control.

Besides presenting the pros and cons of each approach, my goal was also to show that the two approaches can be needed in different parts of the system, or at different times in the development process. The size and complexity of the databases, the team's capabilities or preferences, and the development processes are factors that will influence the adoption of, or the variation between, the two approaches.

Here are some “sensations” collected from the audience:

  • This presentation is especially useful for those who are starting to implement database source control and want to know the available options/approaches;
  • The meaning of “introducing changes in the database” and its effects/implications are not very familiar concepts to the audience;
  • The management/articulation of changes between databases and applications is not a clear process or necessity for the audience. There remain different/separate views of databases and applications (next presentation/challenge: show how to deploy an application and a database together).



Flyway: “Hello database migrations”


Flyway is an open source database migration tool that allows you to manage database changes using migrations. Last week version 4.0.1 was released and I decided to write my first post about Flyway.

I started to use the Flyway command-line almost 3 years ago (version 2.2.1), and the main reason that made me adopt it, and keep using it nowadays, is its simplicity: “database migrations made easy”. This key factor translates into the following:

  • Zero dependencies (you only need Java and your JDBC driver)
    • You can download the version that already includes both Java and the driver;
    • This is a key factor for the easy setup process;
  • Easy to set up, no need to install
    • You just have to configure Flyway: target server, migrations location, etc.;
    • This makes the deploy process extremely easy;
  • The scripts are written in SQL
    • You, or your team, do not have to learn or use a different language to create migrations (see the example below).
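For instance, a first migration could be a plain SQL file following Flyway's default naming convention, V<version>__<description>.sql; the table below is purely hypothetical:

    -- V1__Create_person_table.sql (hypothetical example migration)
    CREATE TABLE Person (
        Id INT NOT NULL PRIMARY KEY,
        Name NVARCHAR(100) NOT NULL
    );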

Flyway commands

Flyway provides 6 basic commands:

  • migrate
    • Applies all pending migrations up to the latest version, or up to a specific target version;
    • If the metadata table doesn't exist, this command will create it automatically;
    • This is the key command of the Flyway workflow;
  • clean
    • Drops all database objects in the configured schemas;
    • This command should be used with caution, especially in system databases or in a production environment;
    • It's useful in development and test environments, enabling a fresh start by completely cleaning your database;
  • info
    • Gives current status information about all the migrations;
    • This is done by checking the migration scripts against the metadata table;
    • Allows you to know if a migration was applied successfully, is still pending, or was ignored;
  • validate
    • Validates applied migrations against the migration scripts available in your folder;
    • Allows you to validate that all migrations were applied, i.e. that there are no pending migrations;
    • Allows you to validate whether a migration script was changed after being successfully applied;
      • This validation is done through checksum comparison;
      • It allows you to reliably recreate the database schema;
  • baseline
    • Allows you to baseline an existing database;
    • All migrations up to and including the baseline version will be ignored;
    • If the metadata table doesn't exist, this command will create it automatically;
  • repair
    • This command repairs the metadata table;
    • It removes the entries of failed migrations from the table;
    • It realigns the checksums of the applied migrations with those of the available migration scripts.

[Image: Flyway commands]
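Purely as an illustration (the connection details are placeholders, not from a real environment), a typical sequence on the command line could be:

    REM Check the current status, apply the pending migrations, then verify.
    flyway -url=jdbc:jtds:sqlserver://MyServer:1433/MyDatabase1 -user=myUser -password=mySecretPassword info
    flyway -url=jdbc:jtds:sqlserver://MyServer:1433/MyDatabase1 -user=myUser -password=mySecretPassword migrate
    flyway -url=jdbc:jtds:sqlserver://MyServer:1433/MyDatabase1 -user=myUser -password=mySecretPassword validate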

Flyway configuration

The Flyway configuration can be specified in two ways:

  1. In the configuration file: flyway.conf
  2. On the command line, using the format “-key=value” (this overrides the values from the configuration file)

The following picture shows all available configuration options for Flyway version 3.2.1:

[Image: Flyway 3.2.1 configuration options]
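To give an idea of the format, a few common options could look like this in flyway.conf (illustrative values, not from the original post):

    # Where to look for migration scripts
    flyway.locations=filesystem:.
    # Schemas managed by Flyway
    flyway.schemas=dbo
    # Name of the metadata table (default: schema_version)
    flyway.table=schema_version
    # Versioned migration file prefix (default: V)
    flyway.sqlMigrationPrefix=V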

Flyway metadata table

The metadata table is used to track the state of the database. It allows you to know which migrations have already been applied, when they were applied and by whom. Additionally, it also tracks the migration checksums.

[Image: Flyway metadata table]

The default name of the metadata table is “schema_version”. If the database is empty and the metadata table does not exist, Flyway will create it automatically.

Flyway scans the migrations directory and checks the migrations against the metadata table. Migrations are sorted based on their version number and applied in order.
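As an illustration of that ordering, three hypothetical files in the sql folder would be applied like this:

    -- Applied in version order, regardless of when the files were created:
    --   1st: V1__Create_person_table.sql
    --   2nd: V1.1__Add_person_index.sql
    --   3rd: V2__Create_address_table.sql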

Flyway command-line structure

When you download and extract Flyway command-line you will find the following structure:

[Image: Flyway command-line folder structure]

  • conf
    • In this folder you will find the configuration file “flyway.conf”
  • drivers
    • This folder contains the jdbc drivers
  • jars
    • In this folder you can add Java-based migrations
  • lib
    • This folder contains Flyway jar files
  • sql
    • In this folder you can add SQL migrations
  • flyway.cmd
    • File responsible for executing Flyway (Windows command script)