Earth science data processing tasks present many challenges. They often ingest large input datasets and require many CPU-hours to generate results. All but the simplest tasks are decomposed into a series of computational or data-manipulation steps, collectively known as a scientific workflow. To reduce the burden of orchestrating and running the dependent processing steps, a workflow execution engine is needed. This poster describes the lessons learned by the CLARREO Pathfinder (CPF) team while developing multiple scientific workflows and executing them in a cloud computing environment with the open-source Nextflow engine. Nextflow is designed around three stated goals: first, it does not dictate how individual steps are implemented (i.e., it is language- and interface-agnostic); second, it supports easy configuration and modularity at the workflow level, so that others can readily execute our workflows and reproduce our results; finally, it eases development by transparently scaling execution from local to remote environments. Nextflow originated in the bioinformatics domain but is a good fit for any scientific workflow that is well described by a dataflow diagram. The CPF team has developed Nextflow pipelines (i.e., scientific workflows) to simulate CLARREO radiance, generate large look-up tables for inter-calibration algorithms, and produce L4 inter-calibration data products. These pipelines consume anywhere from a handful to hundreds of thousands of CPU-hours. While developing and evolving these pipelines, we have identified many design patterns, pitfalls, and solutions to common problems. Our goal is to demonstrate important aspects of designing, implementing, running, and ultimately sharing Nextflow pipelines in the domain of Earth science.
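To make the workflow-engine goals above concrete, the following is a minimal sketch of a Nextflow (DSL2) pipeline in the style the abstract describes. The process name, script command, and `params.scenes` file pattern are hypothetical illustrations, not the CPF team's actual pipeline code; the sketch shows how a step can wrap an arbitrary executable (language-agnostic) while Nextflow handles dataflow and parallel execution.

```groovy
// Hypothetical Nextflow DSL2 pipeline sketch (not the actual CPF code).
nextflow.enable.dsl = 2

// Each process wraps an arbitrary command; Nextflow does not care what
// language implements it, only about its declared inputs and outputs.
process SIMULATE_RADIANCE {
    input:
    path scene                       // one input scene file per task

    output:
    path "radiance_${scene.baseName}.nc"

    script:
    """
    simulate_radiance.py --input ${scene} --output radiance_${scene.baseName}.nc
    """
}

workflow {
    // A channel of input files drives one process invocation per scene;
    // the same script scales from a laptop to cloud batch executors via
    // configuration alone, with no change to the workflow logic.
    scenes = Channel.fromPath(params.scenes)
    SIMULATE_RADIANCE(scenes)
}
```

Run locally with, e.g., `nextflow run main.nf --scenes 'data/*.nc'`; switching to a cloud or HPC backend is a matter of executor configuration rather than code changes, which is the transparent-scaling property the abstract highlights.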