Automated Testing Layers and Execution Configuration

One of the most beneficial things you can do when developing production-quality applications is to apply automated testing at a few different layers. On my most recent projects we’ve successfully built and deployed applications with no dedicated QA team and a minimal amount of manual testing. First, let’s take a look at our layers.

  1. Unit Tests
  2. Spring Database/MVC Integration tests
  3. Pre-deployment Selenium Integration Tests
  4. Post-deployment Selenium Integration Tests

and then how we need to configure our tests for execution during

  1. development
  2. build

Unit Tests

Unit testing gives us a few advantages. To put it bluntly, it forces you to design and code loosely coupled objects, like we’ve always been taught but haven’t always practiced. This isn’t to say that other code isn’t testable, but testing code that does 100 different things becomes a huge nightmare.

Next, unit testing also documents our code. By writing a unit test I am telling other developers who end up working on the code “this is how it should work in these situations” and “these are my assumptions for this code”. Unit tests combined with verbose, self-documenting code help eliminate the need for large amounts of javadoc and other written documentation (not to say you should write no documentation, but you should understand when and where it is necessary).

The main thing to understand here is that these tests do not determine if your system actually works. They just make sure the assumptions of the initial developer hold true for a given unit (a class in this case).

Let’s take a look at an example unit test where we isolate our class under test using Mockito to mock out its dependencies.

@RunWith(MockitoJUnitRunner.class)
public class ManifestServiceImplTest {
	private static final long FILEID = -1L;
	@InjectMocks
	private ManifestService manifestServiceImpl = new ManifestServiceImpl();
	@Mock
	private UserService userService;
	@Mock
	private MidService midService;
	@Mock
	private ManifestRepository manifestRepository;
	private Manifest manifest;
	private User user;
	private final String username = "abc";
	@Captor
	private ArgumentCaptor<Pageable> manifestPageCaptor;

	@Before
	public void setup() {
		user = new User();
		user.setUsername(username);
		when(userService.find(username)).thenReturn(user);
		manifest = new Manifest();
		when(manifestRepository.findOne(FILEID)).thenReturn(manifest);
		when(manifestRepository.save(manifest)).thenReturn(manifest);
	}

	@Test
	public void getAvailableEnvNone() {
		when(midService.hasCompletedMidCertificationStatus(username))
				.thenReturn(false);
		when(midService.hasIncompletedMidCertificationStatus(username))
				.thenReturn(false);
		assertTrue("no manifestEnvs should be returned if user has no mid",
				manifestServiceImpl.getAvailableManifestEnvs(username)
						.isEmpty());
	}

	@Test
	public void getAvailableEnvCompleteOnly() {
		when(midService.hasCompletedMidCertificationStatus(username))
				.thenReturn(true);
		when(midService.hasIncompletedMidCertificationStatus(username))
				.thenReturn(false);
		Set<ManifestEnv> manifestEnvs = manifestServiceImpl
				.getAvailableManifestEnvs(username);
		assertEquals(
				"manifestEnvs should have 2 entries when user only has completed mid cert",
				2, manifestEnvs.size());
		assertTrue("manifestEnvs should contain all ManifestEnv enums",
				manifestEnvs.containsAll(Arrays.asList(ManifestEnv.values())));
	}

	@Test
	public void getAvailableEnvIncompleteOnly() {
		when(midService.hasCompletedMidCertificationStatus(username))
				.thenReturn(false);
		when(midService.hasIncompletedMidCertificationStatus(username))
				.thenReturn(true);
		Set<ManifestEnv> manifestEnvs = manifestServiceImpl
				.getAvailableManifestEnvs(username);
		assertEquals(
				"manifestEnvs hsould only have 1 entry when user has only incomplete mid cert",
				1, manifestEnvs.size());
		assertTrue("mainfestEnvs should only contain TEM",
				manifestEnvs.contains(ManifestEnv.TEM));
	}

	@Test
	public void getAvailableEnvBoth() {
		when(midService.hasCompletedMidCertificationStatus(username))
				.thenReturn(true);
		when(midService.hasIncompletedMidCertificationStatus(username))
				.thenReturn(true);
		Set<ManifestEnv> manifestEnvs = manifestServiceImpl
				.getAvailableManifestEnvs(username);
		assertEquals(
				"manifestEnvs should have 2 entries when user only has completed mid cert",
				2, manifestEnvs.size());
		assertTrue("manifestEnvs should contain all ManifestEnv enums",
				manifestEnvs.containsAll(Arrays.asList(ManifestEnv.values())));
	}

	@Test
	public void find() {
		when(manifestRepository.findOne(FILEID)).thenReturn(manifest);
		final Manifest returnedManifest = manifestServiceImpl.find(FILEID);
		verify(manifestRepository).findOne(FILEID);
		assertEquals("manifest should be returned when found by FILEID",
				manifest, returnedManifest);
	}

	@Test
	public void findNotFound() {
		when(manifestRepository.findOne(FILEID)).thenReturn(null);
		final Manifest returnedManifest = manifestServiceImpl.find(FILEID);
		verify(manifestRepository).findOne(FILEID);
		assertEquals(
				"null should be returned when a manifest file is not found",
				null, returnedManifest);
	}

	@Test
	public void findUserManifestHistory() {
		final Page<Manifest> page = new PageImpl<Manifest>(
				Lists.newArrayList(manifest));
		when(
				manifestRepository.findByUserUsernameOrderByCreatedTimeDesc(
						eq("abc"), isA(Pageable.class))).thenReturn(page);
		manifestServiceImpl.findUserManifestHistory("abc");
		verify(manifestRepository).findByUserUsernameOrderByCreatedTimeDesc(
				eq("abc"), manifestPageCaptor.capture());
		assertEquals("user manifest histroy should be max 7 for page size", 7,
				manifestPageCaptor.getValue().getPageSize());
		assertEquals(
				"user manifest histroy should always return the first page", 0,
				manifestPageCaptor.getValue().getPageNumber());
	}

	@Test
	public void create() {
		final Manifest returnedManifest = manifestServiceImpl.create(manifest,
				username);
		assertEquals("user should be set to manifest when creating", user,
				returnedManifest.getUser());
		verify(manifestRepository).save(manifest);
	}

}

and our class under test

@Service
public class ManifestServiceImpl implements ManifestService {

	@Autowired
	private ManifestRepository manifestRepository;

	@Autowired
	private UserService userService;

	@Autowired
	private MidService midService;

	@Override
	public Manifest create(final Manifest manifest, final String username) {
		Validate.notNull(manifest);
		Validate.notBlank(username);
		final User user = userService.find(username);
		Validate.notNull(user);
		manifest.setUser(user);
		return manifestRepository.save(manifest);
	}

	@Override
	public Set<ManifestEnv> getAvailableManifestEnvs(final String username) {
		Validate.notBlank(username);
		final Set<ManifestEnv> envs = Sets.newHashSet();
		if (midService.hasCompletedMidCertificationStatus(username)) {
			envs.add(ManifestEnv.PROD);
			envs.add(ManifestEnv.TEM);
			return envs;
		}
		if (midService.hasIncompletedMidCertificationStatus(username)) {
			envs.add(ManifestEnv.TEM);
		}
		return envs;
	}

	@Override
	@PostAuthorize("returnObject == null or returnObject.user.username == principal.username")
	public Manifest find(final long id) {
		return manifestRepository.findOne(id);
	}

	@Override
	public Page<Manifest> findUserManifestHistory(final String username) {
		Validate.notBlank(username);
		final Pageable pageable = new PageRequest(0, 7);
		return manifestRepository.findByUserUsernameOrderByCreatedTimeDesc(
				username, pageable);
	}
}

I mainly want to look at the find method so we can demonstrate what unit tests do and do not do for us.

The find method does nothing except delegate to a Spring Data repository interface, so there is nothing really to test except that the method gets called. Even so, we have two test methods for it. Why? We can’t test much functionality here since our method isn’t doing much on its own, and code coverage would be 100% with a single test method verifying that the mocked-out object had its findOne method called. But we have a found and a notFound test, and this is because our interface can return a null or non-null object. So here we aren’t really testing anything; we are documenting that our interface will return null if nothing is found by the repository.

That being said, let’s move on to some tests that actually test our system (partially) as a whole.

Spring Database and MVC Integration Tests

This is our first layer of integration tests. We use the spring-test framework for building these tests, which focus on our spring container and down. These tests spin up their own Spring context and are not deployed to a container. Our tests also have full access to the Spring context, so we can inject beans into our test classes.

First, let’s look at a Spring database integration test for our ManifestServiceImpl class to compare it with the unit tests we created.

@FlywayTest
@DBUnitSupport(loadFilesForRun = { "CLEAN_INSERT", "/dbunit/dbunit.base.xml",
		"CLEAN_INSERT", "/dbunit/dbunit.dev.base.xml", "CLEAN_INSERT",
		"/dbunit/dbunit.manifest.xml", "CLEAN_INSERT", "/dbunit/dbunit.mid.xml" })
public class ManifestServiceImplSpringDatabaseITest extends
		AbstractSpringDatabaseTest {

	@Autowired
	private ManifestService manifestService;

	@Autowired
	private UserRepository userRepository;
	private User user;

	@Before
	public void setup() {
		user = userRepository.findOne(-1L);
	}

	@Test
	@DBUnitSupport(loadFilesForRun = { "CLEAN_INSERT",
			"/dbunit/dbunit.mid.cert.incomplete.xml" })
	public void getAvailableManifestEnvsTEMOnly() {
		Set<ManifestEnv> manifestEnvs = manifestService
				.getAvailableManifestEnvs(user.getUsername());
		assertEquals(
				"only one ManifestEnv should be returned when user only has incomplete mid",
				1, manifestEnvs.size());
		assertTrue("returned manifestEnvs should contain TEM",
				manifestEnvs.contains(ManifestEnv.TEM));
	}

	@Test
	@DBUnitSupport(loadFilesForRun = { "CLEAN_INSERT",
			"/dbunit/dbunit.mid.cert.complete.xml" })
	public void getAvailableManifestEnvsTEMAndPROD() {
		Set<ManifestEnv> manifestEnvs = manifestService
				.getAvailableManifestEnvs(user.getUsername());
		assertEquals(
				"only TEM and PROD should be returned when user only has complete mid",
				2, manifestEnvs.size());
		assertTrue("returned manifestEnvs should contain TEM and PROD",
				manifestEnvs.containsAll(Arrays.asList(ManifestEnv.values())));
	}

	@Test
	public void findFound() {
		assertNotNull(manifestService.find(-1L));
	}

	@Test
	public void findNotFound() {
		assertNull("null should be returned when file not found",
				manifestService.find(-10L));
	}

	@Test
	@DBUnitSupport(loadFilesForRun = { "CLEAN_INSERT",
			"/dbunit/dbunit.manifest.xml" })
	public void findUserManifestHistory() {
		assertEquals("user should have 2 manifest in their history", 2,
				manifestService.findUserManifestHistory(user.getUsername())
						.getNumberOfElements());
	}

	@Test
	public void create() throws IOException {
		final Manifest manifest = new Manifest();
		manifest.setEnvironment(ManifestEnv.PROD);
		byte[] data = "hello".getBytes();
		final MultipartFile multipartFile = new MockMultipartFile("somefile",
				"file.txt", null, new ByteArrayInputStream(data));

		manifest.setMultipartFile(multipartFile);
		final Manifest returnManifest = manifestService.create(manifest,
				user.getUsername());
		assertTrue(returnManifest.getFileSystemResource().exists());
		assertNotNull("id should be set for saved manifest", manifest.getId());
		assertNotNull("createdTime should be set on manifest",
				manifest.getCreatedTime());
		assertNotNull("path should be set on manifest", manifest.getPath());
		assertNotNull("filename should be set on manifest",
				manifest.getFilename());
		assertEquals("file should be saved when manifest is saved",
				data.length, IOUtils.toByteArray(manifest
						.getFileSystemResource().getInputStream()).length);
	}
}

Again, similar test methods, but this time our intentions are different. For integration tests we are now actually testing the application as it would be run in a live environment. We don’t mock anything here, but we do need to set up test data in our database so that our tests are reproducible.

Let’s look at the create method this time. Since our object is a hibernate entity, we expect hibernate to perform some operations for us. In this case, our entity has a prePersist method that writes a file to the file system before saving our entity to the database. That method also sets up the state of our entity by storing the path to the file, its original filename, and the time it was created, and hibernate assigns an id.

The @FlywayTest annotation handles our database lifecycle. It can be placed at the method or class level and will clean and rebuild the database. This combined with the @DBUnitSupport annotation lets us fully control the state of our database for each test. See https://github.com/flyway/flyway-test-extensions for more information.
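
To make the method-level usage concrete, here is a minimal sketch (not taken from the project’s test suite) of @FlywayTest on a single test method:

	@Test
	@FlywayTest
	@DBUnitSupport(loadFilesForRun = { "CLEAN_INSERT", "/dbunit/dbunit.base.xml" })
	public void startsFromAFreshSchema() {
		// the schema was just cleaned and rebuilt from the migration scripts,
		// and only dbunit.base.xml has been loaded, no matter what earlier
		// tests in this class did to the database
	}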

That being said, let’s take a look at the AbstractSpringDatabaseTest class that we extend so we can see how everything is configured for these tests.

@RunWith(SpringJUnit4ClassRunner.class)
@Category(IntegrationTest.class)
@ContextConfiguration(classes = { SpringTestConfig.class })
@ActiveProfiles({ "DEV_SERVICES", "TEST_DB", "DEV_SECURITY" })
@TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
		FlywayDBUnitTestExecutionListener.class,
		TransactionalTestExecutionListener.class })
@TransactionConfiguration(defaultRollback = false)
@Transactional("transactionManager")
public class AbstractSpringDatabaseTest {

}

A few things here. First, the @RunWith and @ContextConfiguration annotations set up our Spring context, and @ActiveProfiles sets the Spring profiles we want to use while running these tests. @TestExecutionListeners lets us register listeners that spring-test provides hooks for in our tests: DependencyInjectionTestExecutionListener allows us to inject beans directly into our test, FlywayDBUnitTestExecutionListener handles the @FlywayTest and @DBUnitSupport annotations, and TransactionalTestExecutionListener makes our tests transactional so hibernate has transactions to work within. Next, for transactional support we have @TransactionConfiguration, which allows us to configure our transactions, and @Transactional("transactionManager"), which actually wraps our test methods in transactions (you have most likely seen this annotation when writing transactional code).

Next we need to take a look at the SpringTestConfig class

@Configuration
@Import({ DatabaseTestConfig.class })
@ComponentScan(value = "com.company.artifact.app")
@PropertySource("classpath:test.properties")
public class SpringTestConfig {
	@Bean
	public static PropertySourcesPlaceholderConfigurer propertyPlaceholder() {
		final PropertySourcesPlaceholderConfigurer placeholder = new PropertySourcesPlaceholderConfigurer();
		placeholder.setIgnoreUnresolvablePlaceholders(true);
		return placeholder;
	}
}

Again, this class isn’t doing too much. It tells Spring to scan our base package for beans and imports another configuration class. It also pulls in some properties for our tests.

@Configuration
@Profile("TEST_DB")
@PropertySource({ "classpath:flyway.properties" })
public class DatabaseTestConfig {

	@Value("${flyway.user}")
	private String user;
	@Value("${flyway.password}")
	private String password;
	@Value("${flyway.url}")
	private String url;
	@Value("${flyway.locations}")
	private String locations;
	@Value("${flyway.placeholdersPrefix}")
	private String prefix;
	@Value("${flyway.placeholderSuffix}")
	private String suffix;

	@Bean(destroyMethod = "close")
	public DataSource dataSource() {
		final BasicDataSource basicDataSource = new BasicDataSource();
		basicDataSource.setUsername(user);
		basicDataSource.setPassword(password);
		basicDataSource.setUrl(url);
		basicDataSource.setMaxActive(-1);
		return basicDataSource;
	}

	@Bean
	public FlywayHelperFactory flywayHelperFactory() {
		final FlywayHelperFactory factory = new FlywayHelperFactory();
		final Properties flywayProperties = new Properties();
		flywayProperties.setProperty("flyway.user", user);
		flywayProperties.setProperty("flyway.password", password);
		flywayProperties.setProperty("flyway.url", url);
		flywayProperties.setProperty("flyway.locations", locations);
		flywayProperties.setProperty("flyway.placeholderPrefix", prefix);
		flywayProperties.setProperty("flyway.placeholderSuffix", suffix);
		factory.setFlywayProperties(flywayProperties);
		return factory;
	}

	@Bean
	public Flyway flyway() {
		final Flyway flyway = flywayHelperFactory().createFlyway();
		flyway.setDataSource(dataSource());
		return flyway;
	}

	@Bean
	@Qualifier("userNumber")
	public String userNumber() {
		return "";
	}
}

We need to set up our own datasource for our tests since they won’t be run from within our container and thus won’t have access to any JNDI resources. Here we also configure flyway to use that datasource. flyway.properties is actually populated with defaults in our parent maven pom and can be overridden during a test run if needed. We’ll see later, when we talk about the maven build, how we use these properties to run against either an Oracle or H2 database.

Ignore the userNumber bean for now; we’ll get to that when we talk about pre- and post-deployment selenium tests.

Next let’s look at how we extend these database tests to support testing Spring MVC with spring-test.

@WebAppConfiguration
public class AbstractSpringMvcTest extends AbstractSpringDatabaseTest {

  @Autowired
  protected WebApplicationContext webApplicationContext;

  @Autowired
  private FilterChainProxy springSecurityFilterChain;

  protected MockMvc mockMvc;

  @Before
  public void setup() {
    mockMvc =
        MockMvcBuilders.webAppContextSetup(webApplicationContext)
            .addFilter(springSecurityFilterChain).build();
  }

  protected UserDetailsRequestPostProcessor custregUser(final String username) {
    return SecurityRequestPostProcessors.userDetailsService(username).userDetailsServiceBeanId(
        "custregUserDetailsService");
  }
}

Here we are extending our database test functionality and adding some Spring MVC test configuration. The code sets up Spring’s MockMvc for us and sets a username and UserDetailsService for our security so we don’t need to mock out our user. We also annotate our configuration with @WebAppConfiguration so that a WebApplicationContext is created for us.

Currently Spring MVC test does not support Spring Security, but there is an example at https://github.com/spring-projects/spring-test-mvc/blob/master/src/test/java/org/springframework/test/web/server/samples/context/SecurityRequestPostProcessors.java that we borrow from. This lets us create request post processors and add our Spring Security information before tests run. This means we don’t need to mock out our SecurityContextHolder wrapper bean, since we’ll set an actual authentication object on it. This feature will most likely be added in a later version of spring-test.
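
The core of the borrowed idea is small enough to sketch here. Roughly (a condensed, simplified version of the linked sample, not a verbatim copy), the post processor places a real Authentication into the mock request’s session, where the spring security filter chain will find it:

	public class AuthenticationRequestPostProcessor implements RequestPostProcessor {

		private final Authentication authentication;

		public AuthenticationRequestPostProcessor(final Authentication authentication) {
			this.authentication = authentication;
		}

		@Override
		public MockHttpServletRequest postProcessRequest(final MockHttpServletRequest request) {
			// store a populated SecurityContext in the session under the key
			// spring security's HttpSessionSecurityContextRepository reads
			final SecurityContext securityContext = SecurityContextHolder
					.createEmptyContext();
			securityContext.setAuthentication(authentication);
			request.getSession().setAttribute(
					HttpSessionSecurityContextRepository.SPRING_SECURITY_CONTEXT_KEY,
					securityContext);
			return request;
		}
	}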

There’s not much configuration here outside what we’ve covered for our database tests, so let’s take a look at an example using Spring MVC test.

@FlywayTest
@DBUnitSupport(loadFilesForRun = { "CLEAN_INSERT", "/dbunit/dbunit.base.xml",
		"CLEAN_INSERT", "/dbunit/dbunit.dev.base.xml", "CLEAN_INSERT",
		"/dbunit/dbunit.mid.xml", "CLEAN_INSERT",
		"/dbunit/dbunit.mid.cert.complete.xml", "CLEAN_INSERT",
		"/dbunit/dbunit.manifest.xml", })
public class ManifestControllerSpringMvcITest extends AbstractSpringMvcTest {

	private User user;
	@Autowired
	private UserRepository userRepository;
	private MockMultipartFile file;

	@Before
	public void setupData() {
		user = userRepository.findOne(-1L);
		file = new MockMultipartFile("multipartFile", "orig", null,
				"bar".getBytes());
	}

	@Test
	public void index() throws Exception {
		mockMvc.perform(get("/manifests").with(custregUser(user.getUsername())))
				.andExpect(view().name(is("manifest/index")))
				.andExpect(
						model().attributeExists("manifestHistory",
								"manifestEnvironments"));
	}

	@Test
	public void uploadAjaxEnvironmentValidationErrors() throws Exception {

		mockMvc.perform(doFileUpload(file).accept(MediaType.APPLICATION_JSON))
				.andExpect(status().isBadRequest())
				.andExpect(
						jsonPath("$.fieldErrors[0].field", is("environment")))
				.andExpect(
						jsonPath("$.fieldErrors[0].error",
								is("This field cannot be null.")));
	}

	@Test
	public void uploadAjaxFileEmptyValidationErrors() throws Exception {
		mockMvc.perform(
				doFileUpload(
						new MockMultipartFile("multipartFile", "orig", null,
								new byte[0]))
						.accept(MediaType.APPLICATION_JSON).param(
								"environment", "PROD"))
				.andExpect(content().contentType(MediaType.APPLICATION_JSON))
				.andExpect(status().isBadRequest())
				.andExpect(
						jsonPath("$.fieldErrors[0].field", is("multipartFile")))
				.andExpect(
						jsonPath("$.fieldErrors[0].error",
								is("Please select a valid file to upload.")));

	}

	@Test
	public void uploadAjaxSuccess() throws Exception {
		mockMvc.perform(
				doFileUpload(file).param("environment", "PROD").accept(
						MediaType.APPLICATION_JSON)).andExpect(status().isOk())
				.andExpect(content().contentType(MediaType.APPLICATION_JSON));

	}

	@Test
	public void uploadEnvironmentValidationErrors() throws Exception {

		mockMvc.perform(doFileUpload(file))
				.andExpect(status().isOk())
				.andExpect(model().hasErrors())
				.andExpect(
						model().attributeHasFieldErrors("manifest",
								"environment"));
	}

	@Test
	public void uploadEmptyFileValidationErrors() throws Exception {

		mockMvc.perform(
				doFileUpload(new MockMultipartFile("multipartFile", "orig",
						null, new byte[0])))
				.andExpect(status().isOk())
				.andExpect(model().hasErrors())
				.andExpect(
						model().attributeHasFieldErrors("manifest",
								"multipartFile"));
	}

	@Test
	public void uploadSuccess() throws Exception {
		mockMvc.perform(doFileUpload(file).param("environment", "PROD"))
				.andExpect(redirectedUrl("/manifests"))
				.andExpect(model().hasNoErrors());
	}

	private MockHttpServletRequestBuilder doFileUpload(
			final MockMultipartFile file) {
		return fileUpload("/manifests").file(file).with(
				custregUser(user.getUsername()));
	}
}

and the controller under test

@Controller
public class ManifestController {

	@Autowired
	private ManifestService manifestService;

	@Autowired
	private SecurityHolder securityHolder;

	@RequestMapping(value = "manifests", method = RequestMethod.GET)
	public String index(@ModelAttribute final Manifest manifest,
			final Model model) {
		setupManifestModel(model);
		return "manifest/index";
	}

	private void setupManifestModel(final Model model) {
		model.addAttribute("manifestHistory", manifestService
				.findUserManifestHistory(securityHolder.getName()));
		model.addAttribute("manifestEnvironments", manifestService
				.getAvailableManifestEnvs(securityHolder.getName()));
	}

	@RequestMapping(value = { "manifests", "api/manifests" }, method = RequestMethod.POST, produces = MediaType.APPLICATION_JSON_VALUE)
	public @ResponseBody
	Manifest uploadAjax(@Valid @ModelAttribute final Manifest manifest,
			final BindingResult bindingResult)
			throws MethodArgumentNotValidException {
		if (!manifestService.getAvailableManifestEnvs(securityHolder.getName())
				.contains(manifest.getEnvironment())) {
			bindingResult.rejectValue("environment", "invalid.manifest.env");
		}
		if (bindingResult.hasErrors()) {
			throw new MethodArgumentNotValidException(null, bindingResult);
		}
		return manifestService.create(manifest, securityHolder.getName());
	}

	@RequestMapping(value = "manifests", method = RequestMethod.POST)
	public String upload(@Valid @ModelAttribute final Manifest manifest,
			final BindingResult bindingResult, final Model model,
			final RedirectAttributes redirectAttributes) {
		if (!manifestService.getAvailableManifestEnvs(securityHolder.getName())
				.contains(manifest.getEnvironment())) {
			bindingResult.rejectValue("environment", "invalid.manifest.env");
		}
		if (bindingResult.hasErrors()) {
			setupManifestModel(model);
			return "manifest/index";
		}
		manifestService.create(manifest, securityHolder.getName());
		redirectAttributes.addFlashAttribute("flashMessage",
				"manifest.upload.success");
		return "redirect:/manifests";
	}

	@RequestMapping(value = { "manifests/{id}", "api/manifests/{id}" }, method = RequestMethod.GET, produces = MediaType.APPLICATION_OCTET_STREAM_VALUE)
	public @ResponseBody
	FileSystemResource download(@PathVariable final Long id,
			final HttpServletResponse httpServletResponse)
			throws FileNotFoundException, IOException {
		final Manifest manifest = manifestService.find(id);
		if (manifest == null)
			throw new NotFoundException();
		httpServletResponse.setHeader("Content-Disposition",
				"attachment; filename=" + manifest.getFilename());
		return manifest.getFileSystemResource();
	}

	@Autowired
	private MessageSource messageSource;

	@ExceptionHandler(MethodArgumentNotValidException.class)
	@ResponseStatus(value = HttpStatus.BAD_REQUEST)
	public @ResponseBody
	HttpValidationMessage validation(final MethodArgumentNotValidException e,
			final Locale locale) {
		return new HttpValidationMessage(e.getBindingResult(), messageSource,
				locale);
	}
}

So what are and aren’t we testing here? First off, we are not testing the UI or servlet container features. What we are testing is the HTTP API we’ve created in Spring, along with all the services and other objects involved. Your Spring code will be executed as if it had received a real request from the servlet container.

Spring test provides us with a nice builder-pattern API for creating mock HTTP requests and running them through Spring MVC. We can easily include things like request params, content type, and more. Spring then gives us access to the HTTP response, such as the content type, response code, and other headers, along with Spring MVC features like the model and views.

Decoupling External Systems

Before we get into selenium testing, we need to talk about our application profiles. We rely on external systems for all our user/company data and security, and we sometimes even create/modify data in other systems, which would need to be reset. To allow for easily reproducible and fast selenium tests we need to decouple our system from these other systems. To achieve this we use Spring profiles to provide database-backed implementations of our API calls. So instead of a class using Spring’s RestOperations to make an HTTP call, we just have the interface backed by a hibernate object. These database implementations are activated by our DEV_SERVICES profile, which you have seen in our test configurations. We do something similar with our security: instead of using the custom filter provided by the external system, we use Spring Security’s JDBC implementation and tie that to the DEV_SECURITY profile. With this done we can control all the data from the external systems using flyway and dbunit. We can then cover the missing API calls in post-deployment selenium tests or in spring tests.
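
As a minimal sketch of the profile-swapping idea (CompanyClient and its two implementations are hypothetical names, not our actual interfaces, and PROD_SERVICES stands in for whatever profile activates the real clients), the same interface gets a different implementation per profile:

	public interface CompanyClient {
		Company findCompany(String companyId);
	}

	@Service
	@Profile("PROD_SERVICES")
	public class RestCompanyClient implements CompanyClient {

		@Autowired
		private RestOperations restOperations;

		@Override
		public Company findCompany(final String companyId) {
			// live implementation: call the external system over http
			return restOperations.getForObject(
					"https://external.example.com/companies/{id}",
					Company.class, companyId);
		}
	}

	@Service
	@Profile("DEV_SERVICES")
	public class DatabaseCompanyClient implements CompanyClient {

		@Autowired
		private CompanyRepository companyRepository;

		@Override
		public Company findCompany(final String companyId) {
			// dev implementation: read the same data from a table we can
			// populate with flyway and dbunit
			return companyRepository.findByCompanyId(companyId);
		}
	}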

Selenium Integration Tests

Now we can start talking about our selenium tests. The idea here is to split our tests into two categories: pre- and post-deployment.

The purpose of pre-deployment tests is to test the UI and functionality of the system in a reproducible manner, so that they can be run by developers before committing code and during continuous integration builds to catch any issues before we deploy our application. A regression, a database script error, or many of the other things that could go wrong should be caught here. We are testing 90% of the application at this point, including browser, server, and database interactions. We are not testing our external services, since they are backed by the database, and we are not testing any production server settings/features/etc.

Post-deployment tests, on the other hand, are less about verifying that the application’s features/validations work properly and more about testing that the application deployed correctly. These tests need to set up user/company data in the external systems before they run and use the custom authentication those systems provide. They test the happy paths of the functionality to verify that all the external APIs, database connections, etc. are working properly. You can also test web server configuration here, like making sure you redirect all HTTP to HTTPS and any other url rewrites/proxying that would be configured in your production environment but not your development ones.

Let’s start with our ManifestController pre-deployment selenium test

/**
 * 
 * @author stephen.garlick
 * @author lindsei.berman
 * 
 */
@DBUnitSupport(loadFilesForRun = { "CLEAN_INSERT", "/dbunit/dbunit.base.xml",
		"CLEAN_INSERT", "/dbunit/dbunit.dev.base.xml", "CLEAN_INSERT",
		"/dbunit/dbunit.mid.xml", "CLEAN_INSERT",
		"/dbunit/dbunit.mid.cert.incomplete.xml" })
@FlywayTest
public class ManifestControllerSeleniumITest extends AbstractDevSeleniumTest {

	@Value("${selenium.base.url}")
	private String baseUrl;

	@Autowired
	private ManifestPage manifestPage;

	@Autowired
	private ResourceLoader resourceLoader;

	@Autowired
	private SeleniumElementVisibilityTester seleniumElementVisibilityTester;

	@Before
	@Override
	public void setup() throws Exception {
		super.setup();
		getWebDriver().get(baseUrl + "/manifests");
	}

	@Test
	@DBUnitSupport(loadFilesForRun = { "CLEAN_INSERT",
			"/dbunit/dbunit.mid.cert.incomplete.xml" })
	public void fileSizeCannotBeZero() throws IOException {
		manifestPage
				.selectFile(getAbsoluteFilePath("classpath:files/manifest-empty-test-data"));
		assertTrue(manifestPage.isFileErrorDisplayed());
	}

	@Test
	public void successfulUpload() throws IOException {

		manifestPage
				.selectFile(
						getAbsoluteFilePath("classpath:files/manifest-not-empty-test-data"))
				.submit();
		assertTrue(manifestPage.getManifestHistorySize() >= 1);
	}

	/**
	 * When Client's certification is incomplete he/she should be able to view
	 * only the Pre-Production option in the Environment selection drop down box
	 * 
	 * @throws IOException
	 */
	@Test
	@DBUnitSupport(loadFilesForRun = { "CLEAN_INSERT",
			"/dbunit/dbunit.mid.cert.incomplete.xml" })
	public void userIncompleteCertificationOnlyViewPreProduction()
			throws IOException {
		assertEquals("Pre-Production", manifestPage.getTEMEnvironmentText());

	}

	/**
	 * When Client is Certified to upload manifests he/she should be able to
	 * view both Pre-Production and Production options in the Environment
	 * selection drop down box
	 * 
	 * @throws IOException
	 */
	@Test
	@DBUnitSupport(loadFilesForRun = { "CLEAN_INSERT",
			"/dbunit/dbunit.mid.cert.complete.xml" })
	public void userCertifiedViewBothPreProductionAndProduction()
			throws IOException {
		assertEquals("user should see both prod and preprod options", 2,
				manifestPage.getNumberOfEnvironmentOptions());
	}

	/**
	 * When the user picks a manifest using the manifest select button, the
	 * manifest name should be displayed beside the cancel and upload buttons.
	 * Then once the cancel button is pressed the name should no longer be
	 * displayed and the file-select should be displayed.
	 * 
	 * @throws IOException
	 */

	@Test
	public void manifestCancelSuccessful() throws IOException {
		int before = manifestPage.getManifestHistorySize();
		manifestPage
				.selectFile(
						getAbsoluteFilePath("classpath:files/manifest-not-empty-test-data"))
				.cancel();
		assertTrue(manifestPage.isFileSelectDisplayed());
		int after = manifestPage.getManifestHistorySize();
		assertEquals(before, after);
	}

	/**
	 * After the manifest select button is pressed and a file is chosen
	 * successfully (not too small), the upload and cancel buttons should be
	 * visible.
	 * 
	 * @throws IOException
	 */
	@Test
	public void manifestClickAndFileChosenUploadAndCancelDisplayed()
			throws IOException {
		manifestPage
				.selectFile(getAbsoluteFilePath("classpath:files/manifest-not-empty-test-data"));
		List<String> buttons = Lists.newArrayList("upload-file-button",
				"cancel-file-button");
		seleniumElementVisibilityTester.testElementsDisplayedAndEnabled(
				getWebDriver(), buttons);
	}

	private String getAbsoluteFilePath(final String resource)
			throws IOException {
		return resourceLoader.getResource(resource).getFile().getAbsolutePath();
	}
}

Again you’ll see we are controlling the database with flyway and dbunit. One thing you might notice is that we need a server started up to run this test. We solve this later with maven for our builds, but during development we need to have our server up when running our tests. This can be solved by Arquillian, which is quickly approaching production readiness; we won’t be going into that today, but look for a future post.

If you’ve done browser work you’ll notice a lot of familiar things in the above code, like CSS selectors. Here we are able to test that specific elements on our page are visible, enabled, and anything else you could determine from within a browser. This is because of Selenium’s WebDriver, which interacts with each browser’s API directly; turn on debugging and you can see HTTP calls for each interaction you perform within the test.

Let’s go a little deeper and start looking at our base classes.

@Category(IntegrationTest.class)
@TestExecutionListeners({DependencyInjectionTestExecutionListener.class,
    FlywayDBUnitTestExecutionListener.class})
@ActiveProfiles({"TEST_DB", "TEST_DEV"})
@ContextConfiguration(classes = {DatabaseTestConfig.class, SeleniumTestConfig.class})
public class AbstractDevSeleniumTest extends AbstractSeleniumTest {

}

Most of this should look familiar. We have a new profile, TEST_DEV, which we’ll discuss in a moment. We also see a new configuration class

@Configuration
@ComponentScan("com.company.artifact.test.selenium")
@PropertySource({ "classpath:selenium/selenium.properties" })
public class SeleniumTestConfig {

	@Bean(destroyMethod = "stop", initMethod = "start")
	public ChromeDriverService chromeDriverService() throws IOException {
		final ChromeDriverService chromeDriverService = new ChromeDriverService.Builder()
				.usingDriverExecutable(
						new File(System.getProperty("user.home")
								+ "/chromedriver")).usingAnyFreePort().build();
		return chromeDriverService;
	}

	@Bean(destroyMethod = "quit")
	public ChromeDriver chromeDriver() throws IOException {
		final ChromeDriver chromeDriver = new ChromeDriver(
				chromeDriverService());
		return chromeDriver;
	}

	/**
	 * Configuration for integration tests that run during the build process.
	 * 
	 * @author stephen.garlick
	 * 
	 */
	@Configuration
	@Profile("TEST_DEV")
	@PropertySource("classpath:selenium/selenium-build.properties")
	static class BuildSeleniumConfig {

	}

	/**
	 * Configuration for integration tests that run post deployment.
	 * 
	 * @author stephen.garlick
	 * 
	 */
	@Configuration
	@Profile("TEST_SIT")
	@PropertySource("classpath:selenium/selenium-sit.properties")
	static class SitSeleniumConfig {

	}

	@Bean
	public static PropertySourcesPlaceholderConfigurer propertyPlaceholder() {
		final PropertySourcesPlaceholderConfigurer placeholder = new PropertySourcesPlaceholderConfigurer();
		placeholder.setIgnoreUnresolvablePlaceholders(true);
		return placeholder;
	}

}

Here we set up our chromeDriverService, which expects the chromedriver executable in the user’s home directory, and then the chromeDriver bean itself, which we’ll be using to interact with the browser. We also component scan for our reusable selenium beans and pull in some properties.

Next let’s take a look at our base test class

@RunWith(SpringJUnit4ClassRunner.class)
public abstract class AbstractSeleniumTest {

	@Value("${bcg.user.name}")
	private String bcgUserName;
	private String userNumber;
	private String username;

	@Autowired
	@Qualifier("userNumber")
	private Provider<String> userNumberProvider;

	@Autowired
	private WebDriver webDriver;

	@Autowired
	private SeleniumLoginTester loginTester;

	@Autowired
	private SeleniumCreateUserTester createUserTester;

	@Before
	public void setup() throws Exception {
		userNumber = userNumberProvider.get();
		username = bcgUserName + userNumber;
		createUserTester.createUser(webDriver, username);
		loginTester.loginIn(webDriver, username);
	}

	@After
	public void tearDown() throws Exception {
		webDriver.manage().deleteAllCookies();
	}

	public WebDriver getWebDriver() {
		return webDriver;
	}

}

This is where a lot of the work is going on for our tests, so let’s break it down.

First, the setup method. This method runs for both our pre- and post-deployment tests but does different things based on the TEST_DEV or TEST_SIT profile. In a TEST_DEV (pre-deployment) test, the userNumber provider returns an empty string, which gets appended to the bcgUserName property to form the username for the test. Next it calls createUser, which in this case is an empty implementation since dbunit takes care of creating the user during database setup. Then it logs in using the dev login page we discussed earlier. Under the TEST_SIT profile, userNumber instead pulls a number from a database sequence (which we’ll see when we look at the post-deployment configuration), createUser actually creates a user in the external system, and login does nothing because we are already logged in after creating the user.
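
To illustrate how those per-profile behaviors can be wired (a hypothetical sketch; the real class names and page details differ), SeleniumCreateUserTester gets one implementation per profile:

	@Component
	@Profile("TEST_DEV")
	public class NoopSeleniumCreateUserTester implements SeleniumCreateUserTester {

		@Override
		public void createUser(final WebDriver webDriver, final String username) {
			// nothing to do pre-deployment: dbunit already inserted this user
		}
	}

	@Component
	@Profile("TEST_SIT")
	public class ExternalSystemSeleniumCreateUserTester implements
			SeleniumCreateUserTester {

		@Override
		public void createUser(final WebDriver webDriver, final String username) {
			// post-deployment: drive the external system's real registration
			// flow (url and element ids here are illustrative)
			webDriver.get("https://registration.example.com/signup");
			webDriver.findElement(By.id("username")).sendKeys(username);
			webDriver.findElement(By.id("submit")).click();
		}
	}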

The only thing we do after each test is clear the cookies, and thus the authentication info, from the webDriver. We do this instead of instantiating a new webDriver so that we can create the bean as a singleton and reduce the time we spend creating and tearing down browsers.

Next let’s look at our post-deployment configuration.

@Category(SitIntegrationTest.class)
@TestExecutionListeners({DependencyInjectionTestExecutionListener.class})
@ActiveProfiles({"TEST_SIT"})
@ContextConfiguration(classes = {SitIntegrationTestConfig.class, SeleniumTestConfig.class})
public class AbstractSitSeleniumTest extends AbstractSeleniumTest {

}

Again, this is similar to our pre-deployment setup except for the different spring profiles and configuration classes. This time we aren’t dealing with setting up the database, so all of that configuration is gone.

Let’s take a look at the new configuration class

@Configuration
@Profile("TEST_SIT")
public class SitIntegrationTestConfig {

  @Bean(destroyMethod = "close")
  public DataSource dataSource() {
    final BasicDataSource basicDataSource = new BasicDataSource();
    basicDataSource.setDriverClassName("oracle.jdbc.OracleDriver");
    basicDataSource.setUsername("user");
    basicDataSource.setPassword("password");
    basicDataSource.setUrl("url");
    basicDataSource.setMaxActive(-1);
    return basicDataSource;
  }

  @Bean
  public JdbcOperations jdbcOperations() {
    return new JdbcTemplate(dataSource());
  }

  @Bean
  @Qualifier("userNumber")
  @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
  public String userNumber() {
    return jdbcOperations().queryForObject("select sit_user_seq.nextval from dual", String.class);
  }
}

Here we set up a connection to a database where we have a sequence object created. Before each test our base class pulls a new userNumber bean, which returns the next number from the sequence, and appends it to our username so that we can create new users in the live system without needing to update the username every time the tests are run.

Finally, remember that the setup method on the base class can be overridden to change the default create/log in/etc. behavior. This can be useful for helper scripts that create some number of users in the external system for manual testing and other things.
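
For example, a throwaway helper along these lines (a hypothetical sketch that just reruns the default create-and-login flow a few extra times) can seed accounts for manual testers:

	public class CreateUsersHelperITest extends AbstractSitSeleniumTest {

		@Test
		public void createUsersForManualTesting() throws Exception {
			// setup() already created and logged in one user; repeat the flow
			// to seed a few more accounts, clearing cookies between passes
			for (int i = 0; i < 4; i++) {
				getWebDriver().manage().deleteAllCookies();
				setup();
			}
		}
	}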

Page Pattern Support

It’s a recommended practice to use the page object pattern with selenium. I won’t be going into the pattern itself, but see Page Objects for an explanation from the selenium developers.

We are going to look at a small bit of code used to support this pattern within our tests. You may have seen this object in our selenium test earlier

/**
 * 
 * @author stephen.garlick
 * @author lindsei.berman
 * 
 */
@Page
public class ManifestPage {
	@FindBy(id = "upload-file-button")
	private WebElement submitButton;
	@FindBy(id = "file-select")
	private WebElement fileSelect;
	@FindBy(id = "fileinput")
	private WebElement fileUpload;
	@FindBy(id = "cancel-file-button")
	private WebElement cancelButton;
	@FindBy(id = "file-error")
	private WebElement fileError;
	@FindBys(value = { @FindBy(id = "history-body"), @FindBy(tagName = "tr") })
	private List<WebElement> manifestHistory;
	@FindBy(xpath = "//*[@id='environment']/option[1]")
	private WebElement temEnvironmentOption;
	@FindBy(xpath = "//*[@id='environment']")
	private WebElement environmentOptions;

	public boolean isFileSelectDisplayed() {
		return fileSelect.isDisplayed();
	}

	public ManifestPage selectFile(final String filePath) {
		fileUpload.sendKeys(filePath);
		return this;
	}

	public ManifestPage submit() {
		submitButton.click();
		return this;
	}

	public int getNumberOfEnvironmentOptions() {
		return new Select(environmentOptions).getOptions().size();
	}

	public ManifestPage cancel() {
		cancelButton.click();
		return this;
	}

	public boolean isFileErrorDisplayed() {
		return fileError.isDisplayed();
	}

	public int getManifestHistorySize() {
		return manifestHistory.size();
	}

	public String getTEMEnvironmentText() {
		return temEnvironmentOption.getText();
	}
}

and the Page annotation, which is meta-annotated with Spring’s @Component so that page objects are registered as beans by classpath scanning

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Component
public @interface Page {

}

and the PageBeanPostProcessor, where we check each created bean for the annotation and call PageFactory.initElements to wire the bean’s selenium-annotated fields with our webDriver.

@Component
public class PageBeanPostProcessor implements BeanPostProcessor {

	@Autowired
	private WebDriver webDriver;

	@Override
	public Object postProcessBeforeInitialization(Object bean, String beanName)
			throws BeansException {
		if (bean.getClass().isAnnotationPresent(Page.class)) {
			PageFactory.initElements(webDriver, bean);
		}
		return bean;
	}

	@Override
	public Object postProcessAfterInitialization(Object bean, String beanName)
			throws BeansException {
		return bean;
	}

}

Now we don’t need to worry about initializing our page beans in each test where they are used; we can simply inject them with spring.

Decoupling the Database

By default we develop against an Oracle database, but this means our tests need an Oracle instance set up in advance of the test run. To remove this need we use an in-memory database called H2, which supports Oracle syntax compatibility. While H2 is fine for projects using ORMs like hibernate, it might not be the best option if you are using a lot of vendor-specific features that H2 is not compatible with. Keep that in mind when deciding whether or not to use H2.

We use maven to spawn our H2 database as a TCP server so that our websphere instance and our tests can both connect to it while running in different JVMs. Let’s take a look at our parent pom.

	<profile>
		<id>h2-database</id>
		<activation>
			<property>
				<name>db</name>
				<value>h2</value>
			</property>
		</activation>
		<properties>
			<flyway.url>jdbc:h2:tcp://localhost:8082/mem:test;MODE=Oracle;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE</flyway.url>
			<flyway.user>sa</flyway.user>
			<flyway.password>sa</flyway.password>
			<database.datasource.class>org.h2.jdbcx.JdbcDataSource</database.datasource.class>
			<database.driver.jar>h2-1.3.175.jar</database.driver.jar>
		</properties>
	</profile>

First we have a profile that changes all the flyway connection settings to those of our H2 database.

<plugin>
	<groupId>com.btmatthews.maven.plugins.inmemdb</groupId>
	<artifactId>inmemdb-maven-plugin</artifactId>
	<version>1.4.2</version>
	<configuration>
		<monitorPort>11527</monitorPort>
		<monitorKey>inmemdb</monitorKey>
		<daemon>true</daemon>
		<type>h2</type>
		<port>8082</port>
		<database>test</database>
		<username>${flyway.user}</username>
		<password>${flyway.password}</password>
	</configuration>
	<dependencies>
		<dependency>
			<groupId>com.h2database</groupId>
			<artifactId>h2</artifactId>
			<version>${h2.version}</version>
		</dependency>
	</dependencies>
	<executions>
		<execution>
			<id>start-db</id>
			<goals>
				<goal>run</goal>
			</goals>
			<phase>pre-integration-test</phase>
		</execution>
		<execution>
			<id>stop</id>
			<goals>
				<goal>stop</goal>
			</goals>
			<phase>post-integration-test</phase>
		</execution>
	</executions>
</plugin>

Here is our plugin configuration for spawning the H2 database in the pre-integration-test phase and stopping it in the post-integration-test phase.

And finally, with our H2 database spawned, we can use the flyway plugin to do the initial migration

<plugin>
	<groupId>com.googlecode.flyway</groupId>
	<artifactId>flyway-maven-plugin</artifactId>
	<version>${plugin-version.flyway}</version>
	<executions>
		<execution>
			<phase>pre-integration-test</phase>
			<goals>
				<goal>clean</goal>
				<goal>init</goal>
				<goal>migrate</goal>
			</goals>
			<configuration>
			</configuration>
		</execution>
	</executions>
	<dependencies>
		<dependency>
			<groupId>com.oracle</groupId>
			<artifactId>ojdbc6</artifactId>
			<version>${ojdbc6.version}</version>
		</dependency>
		<dependency>
			<groupId>com.h2database</groupId>
			<artifactId>h2</artifactId>
			<version>${h2.version}</version>
		</dependency>
		<dependency>
			<groupId>${project.groupId}</groupId>
			<artifactId>db</artifactId>
			<version>${project.version}</version>
		</dependency>
	</dependencies>
	<configuration>
	</configuration>
</plugin>

Now our database is up and its schema migrated, and we are ready to deploy to our websphere server and start running our selenium integration tests via the maven failsafe plugin.

Bootstrapping the Websphere Liberty Profile Server

Now that we’ve gotten a database up and migrated, we need a way to set up our test server, in this case websphere liberty profile, so that we can deploy the application and let our selenium tests run.

Again we are going to our pom.xml.

<plugin>
	<groupId>com.ibm.websphere.wlp.maven.plugins</groupId>
	<artifactId>liberty-maven-plugin</artifactId>
	<version>1.0</version>
	<executions>
		<execution>
			<id>pre-integration-setup</id>
			<phase>pre-integration-test</phase>
			<goals>
				<goal>start-server</goal>
				<goal>deploy</goal>
			</goals>
		</execution>
		<execution>
			<id>post-integration-setup</id>
			<phase>post-integration-test</phase>
			<goals>
				<goal>stop-server</goal>
			</goals>
		</execution>
	</executions>
	<configuration>
		<assemblyArtifact>
			<groupId>com.company</groupId>
			<artifactId>wlp-test-server</artifactId>
			<version>1.1</version>
			<type>zip</type>
		</assemblyArtifact>
		<configFile>${project.build.directory}/test-classes/server.xml</configFile>
		<appArchive>${project.build.directory}/webapp.war</appArchive>
		<timeout>60</timeout>
		<verifyTimeout>60</verifyTimeout>
	</configuration>
</plugin>

This plugin allows us to start up a websphere liberty profile server and deploy our war file automatically from maven. We’ve packaged the server as a maven artifact and deployed it to a private repo; this server includes any necessary provided-scope dependencies in its /lib folder before being zipped up.

Next, the plugin lets us supply a server.xml file, websphere’s configuration file. We have the following server.xml template, which gets processed by maven during our build to set the correct database (H2 or Oracle).

<server description="new server">
	<!-- Enable features -->

	<webContainer deferServletLoad="false" />

	<featureManager>
		<feature>jsp-2.2</feature>
		<feature>servlet-3.0</feature>
		<feature>localConnector-1.0</feature>
		<feature>jdbc-4.0</feature>
		<feature>jndi-1.0</feature>
		<feature>beanValidation-1.0</feature>
		<feature>jpa-2.0</feature>
	</featureManager>
	<httpEndpoint host="0.0.0.0" httpPort="9080" httpsPort="9443"
		id="defaultHttpEndpoint" />
	<applicationMonitor updateTrigger="mbean" />

	<dataSource id="db" jndiName="jdbc/datasource">
		<jdbcDriver libraryRef="driverLib" javax.sql.DataSource="${database.datasource.class}"/>
		<properties URL="${flyway.url}"
			password="${flyway.user}" user="${flyway.user}" />
	</dataSource>

	<library id="driverLib">
		<fileset dir="${wlp.install.dir}/lib" includes="${database.driver.jar}" />
	</library>
	<jndiEntry id="profile" jndiName="spring.profiles.active"
		value="DEV_SECURITY,DEV_SERVICES,CONTAINER_DB" />
</server> 

You’ll notice properties like database.datasource.class, which were defined in our pom.xml.

Now all of our tests can be run during our build and we have no need to manually setup any databases or web servers to run our integration tests on.

Closing Comments

Now we have ways to easily develop each layer of our tests and run them during our maven build with a single command. From here we could easily create CI jobs in jenkins to handle testing, reporting, and deployments for us so that we can focus on developing our app and tests.

Brief Overview of Automation and Testing Tools

I’ve been asked recently to do a short overview of the automation and testing tools I’ve used on my current project. I’ve gotten a lot of time to play around with various test tools and configurations, as well as ways to automate the setup and configuration of a build system. We’ll take a quick look at what each tool provides us and list the pros/cons I’ve run into.

Databases and tools

Flyway

What is it?

Flyway is a database versioning and migration tool. It gives us a standard way to build our database schema in a reproducible manner and thus helps us easily create reproducible integration tests. We supply it a list of folders with our scripts, and flyway executes the scripts for us and manages a “schema_version” table with metadata about the scripts that have already run. It uses a specific file naming scheme to achieve the script execution order: VXXXXX__script_name.sql, where XXXXX is a number. Once new scripts are added, you run the migrate command and all scripts with a version number higher than the last executed script will run. It also has a maven plugin, which allows us to manually trigger migrations and easily hook into a Jenkins build/deployment.
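
For a feel of the API, a migration can also be triggered programmatically. A minimal sketch against the com.googlecode.flyway API used here (connection settings are illustrative):

	final Flyway flyway = new Flyway();
	flyway.setDataSource("jdbc:h2:mem:test", "sa", "sa");
	// folders are combined and their scripts run in version order, e.g.
	// V20140101120000__create_manifest_table.sql
	flyway.setLocations("db-all", "db-dev");
	flyway.clean();   // drop everything in the schema
	flyway.migrate(); // run all scripts newer than the schema_version table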

How we’ve used it

We’ve used Flyway to handle migrations on our DEV/SIT environments and use it to determine which scripts need to be packaged for our CAT/PROD environments, since we don’t have permission to run database scripts in those. This multi-env setup is made easy since flyway allows you to supply a list of folders; it combines the scripts in them and runs them in order. We just supply different maven properties/profiles to change the list of folders (i.e. db-all,db-dev for dev, db-all,db-prod for prod) and the database connection.

We’ve also used it to help maintain database state during integration tests. By combining flyway, spring test, and the Flyway Test Extensions we are able to clean and rebuild our database so that each test has a clean schema to run against. Dbunit also comes into play here, setting up our test data after flyway has cleaned and migrated the schema.

Pros

  • Simple and easy to use
  • Reproducible schemas
  • Database scripts committed to version control and versioned in the database
  • Scripts written in SQL
  • Good support for spring, testing, and maven

Cons

  • Can’t use in our upper envs (not a tool issue)
  • Possible issues with multiple users creating/committing scripts due to its versioning nature.
    • If someone uses the same version number as you did there will be conflicts (this is why we use timestamps to reduce this chance).
    • Also, for example: someone commits a script with a higher version than mine and the migration runs, and then I commit my script. My script version is now lower than the latest in the schema_version table. Developers just need to be aware of this and update their script versions if necessary. Flyway also has the ability to run scripts out of order, so it will go back and pick up scripts with a lower version that were added after a higher version ran, but then you run the risk of non-reproducible schemas. Whether or not you enable this option, it is something to be aware of when using flyway.

Other options

dbmigrate
Liquibase

Oracle/Postgresql/Mysql/Etc

What is it?

In this case, everyone should be familiar with at least one of these databases, as they tend to be the most used. Most applications connect to a database to persist the data they collect. We’ll mostly be talking about how we’ve used our database for integration tests.

How we’ve used it

Our project uses the Oracle database, and we originally set up our tests to run against an Oracle XE install. Our tests use flyway to connect to and manage the lifecycle of the database for our integration tests.

Pros

  • Same database your application will connect to in a live env
  • Allows for database specific features to be used (plsql, triggers, etc)

Cons

  • Additional setup required
  • Need to host database externally from tests
  • Slower than in memory database
  • Potentially slower depending on network connection
  • Database must be up and accessible to run tests

H2 Database

What is it?

H2 is a fast, lightweight SQL database written in Java. It supports embedded and server modes and can run completely in memory. It also supports syntax compatibility modes for a number of other databases.

How we’ve used it

Our integration tests were originally set up to run against an Oracle instance. This became an issue when automating our build due to a poor network connection to our dev database test schema. To remedy this we updated our SQL scripts to be compatible with both H2 and Oracle (a minimal change, since we weren’t using any Oracle-specific features, and we enabled H2’s compatibility mode). We then added a maven profile that uses a maven plugin, inmemdb, to start H2 in server mode and changes the connection properties to point at this new H2 server instead of Oracle. This way our tests can be run against either a full Oracle install or H2 with a single command-line property.

We’ve also created a POC in a different branch of the code which uses Arquillian (more on that later) and H2 in embedded mode. I’ll be exploring setting up the H2 server as a spring bean instead of a maven plugin in the future.
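
That future idea is small enough to sketch now (a sketch assuming H2’s org.h2.tools.Server, untested in this project’s setup):

	@Configuration
	public class H2ServerConfig {

		// start/stop the H2 tcp server with the spring context lifecycle;
		// port 8082 matches the connection url used elsewhere in the build
		@Bean(initMethod = "start", destroyMethod = "stop")
		public Server h2TcpServer() throws SQLException {
			return Server.createTcpServer("-tcpPort", "8082", "-tcpAllowOthers");
		}
	}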

Pros

  • Quick to setup using spring
  • No need to externally host
  • Initialized from spring/test (embedded)
  • Initialized during pre-integration maven phase or spring (server)
  • In memory mode very fast for integration tests that need to reset database often

Cons

  • Limited support for database compatibility (no plsql support and other database specific features)
  • Additional memory required in your jvm
  • Embedded mode only accessible from within the same JVM it’s launched in (remedied by running in server mode)

Testing

JUnit

What is it?

Again, this is another one most people are familiar with. JUnit is a framework for writing test code in Java. It forms the foundation of our tests, providing test lifecycle hooks, assertions, maven support, 3rd-party extensions, and more.

How we’ve used it

All our tests, unit and integration alike, start off as a JUnit test and hook in the appropriate runner depending on the type of test we perform.

Our unit tests tend to use vanilla JUnit, or JUnit with Mockito for quick and easy dependency mocking.

We’ll go more into our integration tests setups shortly.

While JUnit has been a great tool, we’ll be looking at TestNG in the future since it has better ways to parameterize tests.

Pros

  • Heavily used in the Java community and simple to use
  • Lots of 3rd party extensions
  • Maven support

Cons

  • Tends to lag behind most other frameworks in features

Other Options

TestNG
Spock

Spring Test

What is it?

Spring Test is a spring module meant to provide support for and ease the testing of spring-enabled applications. It includes test runners for JUnit/TestNG, takes a spring java config/xml to build the application context, supports spring profiles, and more. It also provides extended support for testing Spring MVC, with a way to test routing and assert various parts of the response (i.e. the view returned, the status code, model attributes, etc.).

How we’ve used it

For our database integration tests we’ve wired our tests using the Spring test runner and loaded the entire core module of our project (services/repos/etc) with a Spring-defined datasource (normally the app uses a container-supplied datasource through JNDI). We use Spring profiles to ignore the JNDI datasource and pick up our test datasource. We then use the Flyway test extension to execute flyway clean/migrate and DBUnit to set up data, putting our database in a known state and enabling completely reproducible tests.
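Here’s a minimal sketch of what one of these test classes looks like wired together. It assumes the flyway-test-extensions and spring-test-dbunit libraries (package names vary by version); CoreConfig, the profile name, and the dataset file are hypothetical.

import static org.junit.Assert.assertNotNull;

import org.flywaydb.test.annotation.FlywayTest;
import org.flywaydb.test.junit.FlywayTestExecutionListener;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;

import com.github.springtestdbunit.DbUnitTestExecutionListener;
import com.github.springtestdbunit.annotation.DatabaseSetup;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = CoreConfig.class) // hypothetical core module config
@ActiveProfiles("test") // ignore the jndi datasource and pick up the test datasource
@TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
		FlywayTestExecutionListener.class, DbUnitTestExecutionListener.class })
public class ManifestRepositoryIntegrationTest {

	@Autowired
	private ManifestRepository manifestRepository;

	@Test
	@FlywayTest // flyway clean/migrate before the test
	@DatabaseSetup("manifests.xml") // dbunit dataset putting the db in a known state
	public void findsSavedManifest() {
		assertNotNull(manifestRepository.findOne(-1L));
	}
}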

Pros

  • Actively supported and developed by the SpringSource team
  • Tests use Spring, allowing the creation of reusable test beans
  • Provides additional information to tests that is not available when unit testing (MVC test features)
  • Allows for testing all Spring components

Cons

  • Still not inside our servlet container
  • No access to container resources
  • Cannot test UI
  • Can’t test things controlled by the container (filters, web.xml mappings, etc)

Selenium Webdriver

What is it?

Selenium WebDriver is a tool which allows us to develop browser tests in code from just about any language. It uses the browser’s native API to find and interact with page elements; this is different from the old Selenium, where mouse and keyboard recording was used to interact with the browser. It also supports a wide variety of browsers, including mobile. See the Selenium WebDriver page for more.

How we’ve used it

We use Selenium in a few different configurations so that we can test as much as possible in our build process, catching as many issues as possible before any deployment happens. We can also control our data using properties. We’ll discuss these configurations here.

First off, our configurations have very little duplication thanks to using Spring to wire our tests together. This allows us to create a common base class that sets up Spring/Selenium and then creates and logs in a user for both environments. Let’s take a look at how each is set up.
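But first, a stripped-down sketch of that shared base class; SeleniumTestConfig and TestUserHelper are made-up names standing in for our actual wiring.

import org.junit.Before;
import org.junit.runner.RunWith;
import org.openqa.selenium.WebDriver;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = SeleniumTestConfig.class) // hypothetical test config
public abstract class AbstractSeleniumTest {

	@Autowired
	protected WebDriver driver; // the driver is a spring bean shared across tests

	@Autowired
	private TestUserHelper testUserHelper; // profile-specific create/login behavior

	@Before
	public void createUserAndLogin() {
		// NOOP create + dev panel login in the dev profile,
		// real external create + NOOP login in the SIT profile
		testUserHelper.createUser();
		testUserHelper.login(driver);
	}
}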

Our dev environment is set up to mock out any external APIs using database tables, allowing rapid prototyping and letting us test the majority of the application without connecting to test environments of systems we don’t control. By doing this we can use DBUnit to populate the external data our system relies on and then run tests against it. This environment configuration is controlled by Spring profiles, which swap out the real API calls with database writes and enable a web interface for creating mock data.
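The swap itself boils down to two implementations of one interface selected by profile. A hedged sketch, with hypothetical names for the client and the production profile:

import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

public interface CustomerRegistrationClient {
	void register(User user);
}

@Service
@Profile("production") // hypothetical profile name
class HttpCustomerRegistrationClient implements CustomerRegistrationClient {
	@Override
	public void register(final User user) {
		// real http call to the external registration api
	}
}

@Service
@Profile(Profiles.DEV_SERVICES)
class DatabaseCustomerRegistrationClient implements CustomerRegistrationClient {
	@Override
	public void register(final User user) {
		// write to the mock tables that dbunit also populates
	}
}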

Now that you understand our dev environment we can talk about how Selenium tests are run against it. As we talked about earlier, you can back these tests with either an external database or H2 running in server mode, since our application will be deployed into a container running in its own JVM. For actual application deployment we have two options. First, you can have the application already deployed in a container running in Eclipse, connected to the same database your tests will be connected to; this is generally how you will develop new tests. Second, we can use Maven to deploy our war during the pre-integration-test phase of our build. In this case we have a Maven plugin that spawns a WebSphere Liberty Profile server and deploys the app before running our integration tests (our client uses WebSphere, but you can easily do this for other containers as well). Our Spring profile then uses a NOOP implementation of the create user call in our setup (DBUnit takes care of this data) and logs in using our dev panel (Spring Security database auth).

Now that you understand how the tests are set up and executed, let’s take a look at what we can and can’t test in this configuration. We can now test all UI features as well as the majority of our functional requirements (data creation, validations, etc.). What we can’t test in this case are environment-specific things, like connectivity between systems (firewalls/ports) and webserver request modifications.

Next is our SIT configuration. In this configuration our app is deployed to our test servers (after the dev integration tests run and pass) and is run in its production configuration, using HTTP calls to external APIs. By the time we reach this point, the majority of our actual application testing is already covered. Since most of our integration tests have already run during the build, we mainly test the happy paths to make sure all our API calls properly go through and we know the application is working. Again, for these tests we change our Spring profile to pick up different properties/test beans. In this profile our test is NOT connected to any database, since all the data will be created through the application/external systems. So this time, instead of being NOOP, our create user call creates a user in the external system, and our login code is NOOP since the user is automatically logged in after creation.

We’ll discuss the test flow more when we talk about our Jenkins configuration.

Pros

  • Most tests are executed during the build, so errors are caught fast
  • Using browser APIs allows for tests closer to a real user experience
  • Tests are fully reproducible in the dev configuration due to database lifecycle control

Cons

  • Tests are not run in the container and don’t have access to the container datasource, so they must define their own connection to the same database the container uses
  • The server must be started up outside of the tests (IDE/Maven spawned/etc)

Arquillian

What is it?

Arquillian is a new tool in development by JBoss to allow for in-container tests. Arquillian allows you to define deployments from within your JUnit/TestNG tests and have them deployed to a servlet container. It has a lot of extension support for Spring, persistence, and various servlet containers, plus extensions that add abstraction on top of Selenium. Arquillian itself is still in the early stages of development, with extensions slightly behind the core module.

How we’ve used it

We’ve used Arquillian to develop a proof of concept that allows all our integration tests to be run fully self-contained. We use the Tomcat container, the Spring extension, and the persistence extension. We added a @Deployment to our Selenium test base class, which packages up our war, and changed the Spring runner to the Arquillian runner. Combined with an embedded H2 database we are then able to run tests fully self-contained (in our development profile) without having to have any external resources already started.
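Here’s a rough sketch of that @Deployment method; the base package is hypothetical and the ShrinkWrap resolver API shown here may differ slightly between versions.

import java.io.File;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.jboss.shrinkwrap.resolver.api.maven.Maven;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public abstract class ArquillianSeleniumTestBase {

	@Deployment
	public static WebArchive createDeployment() {
		// resolve the project's runtime dependencies straight from the pom
		final File[] libs = Maven.resolver().loadPomFromFile("pom.xml")
				.importRuntimeDependencies().resolve().withTransitivity()
				.asFile();
		return ShrinkWrap.create(WebArchive.class, "app.war")
				.addPackages(true, "com.example.app") // hypothetical base package
				.addAsLibraries(libs)
				.setWebXML(new File("src/main/webapp/WEB-INF/web.xml"));
	}
}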

The flow of a test in this configuration is as follows:

  1. The Arquillian Spring extension packages up our app with all Maven dependencies (including test/provided, but this is configurable)
  2. Arquillian extracts an embedded Tomcat server and starts it up
  3. Arquillian deploys the war to the embedded Tomcat server
  4. Spring starts up and connects to the database
  5. The test starts, uses the same Spring context that is running in your Tomcat deployment, and has access to everything running on Tomcat, including JNDI resources
  6. The test runs, passes, and everything shuts down
  7. Repeat

As you can see, the main benefit of this setup is that you just hit run and Arquillian takes care of the rest. This removes a lot of extra configuration and container lifecycle management on our part and lets tests run as 100% real tests inside our container.

The only downside at this point is that each test class redeploys Tomcat and the application, which greatly increases our test times (and is the main reason we haven’t merged this POC into trunk). Luckily the Arquillian developers are already aware of this issue and are planning test suite support to reduce these runtimes (see https://issues.jboss.org/browse/ARQ-567).

Pros

  • Tests completely control their own container/test lifecycles
  • No need for external resources to be in place before tests run
  • Full access to the Spring context running on the server from inside tests
  • Tests have access to JNDI and other container resources
  • Supported by JBoss
  • Lots of additional support through extensions

Cons

  • Slow test times due to container redeployment per test class
  • Early stages of development, thus API changes are common
  • Increased memory on the JVM running tests with embedded containers

Build Environment

Vagrant

What is it?

Vagrant is a tool, written in Ruby, that provides a useful abstraction over multiple VM providers (VirtualBox, VMware, EC2, etc.) and allows for easy VM configuration and startup from the command line. It provides hooks for provisioners (such as Chef) and other extensions in the form of plugins. It also lets us define VMs in code, allowing us to check them into version control.

How we’ve used it

We’ve used Vagrant to set up an Ubuntu 13.04 server, mount some shared folders, and configure networking. We used the vagrant-omnibus plugin to manage our Chef install and the vagrant-berkshelf plugin to copy our Chef cookbook dependencies to the VM before provisioning. This is all configured in a Vagrantfile and started up by running the “vagrant up” command.

After this initial setup I developed the Chef cookbook, which we’ll talk about in a few. This allowed me to iteratively develop Chef scripts by adding new functionality and testing it with “vagrant provision” to rerun the cookbook. We’ll discuss this development flow more when we talk about Chef.

Pros

  • Simplifies VM configuration and provisioning
  • VMs defined as code
  • Easy development and testing of Chef cookbooks
  • Good community support

Cons

  • Evolving API and tools

Chef

What is it?

Chef is a tool with a focus on IT infrastructure automation and management. Chef comes in two flavors: a full server/client setup with the intent of fully managing your IT infrastructure, and a standalone version called chef-solo that focuses on executing Chef cookbooks without the server management side. Chef provides abstractions for installing and executing scripts and other common OS operations. It’s written in Ruby, and can execute Ruby code, which gives it a large additional library of useful scripting tools.

You develop Chef cookbooks, which define an idempotent, procedural way of installing a piece of software. This basically means you develop cookbooks that can be rerun over and over and your system will always end up in the same state without error. This development style, combined with Vagrant, allows us to quickly and easily develop cookbooks in small focused chunks that we can rerun over and over as we add features. It also allows you to easily deploy an update to a large number of servers at once with minimal chance of error.

Chef cookbooks can be linted, unit tested, and integration tested to verify working code. Combined with Vagrant and Jenkins, you can set up a continuous integration server for Chef cookbooks.

Chef also comes with the benefit that it is used as the base of Amazon Web Services’ OpsWorks platform, so you can execute custom Chef scripts there easily.

How we’ve used it

We used Chef, along with Vagrant and Berkshelf (a Chef cookbook dependency manager), to develop a Jenkins build box for our app. The build box installs the following software to support the build/test configurations we’ve been talking about.

  1. install/update apt
  2. install java 7
  3. setup dns-search domains and restart networking
  4. install jenkins server
  5. install git and svn
  6. setup svn proxy
  7. download and store svn certs
  8. install maven
  9. install chrome
  10. install xvfb (headless selenium tests)
  11. install chromedriver for selenium test
  12. configure jenkins server

Now, most of the software above already has a Chef cookbook developed that can run against multiple OSes (for example, the java cookbook supports Debian/RHEL/Windows with multiple versions: 6/7/IBM/Oracle/etc). And all cookbooks can be parameterized, allowing for high reusability.

And since everything is defined in code, we can of course check it into version control.

Pros

  • Infrastructure as Code/Version Control
  • Automate installation and configuration of machines
  • Reduce error due to repetition
  • Continuous integration/Testable
  • Quickly stand up identical environments (test/prod mirror/etc)
  • Much smaller than VMs
  • Large base cookbook library
  • AWS OpsWorks support
  • Strong community support
  • Ruby Libraries

Cons

  • Provisioning time when starting from a clean VM (more of an issue for things like AWS autoscaling)

Jenkins

What is it?

Jenkins is a tool written in Java that provides a continuous integration environment for projects. It has a very large plugin library and provides scripting support via the Groovy language. Due to this, Jenkins is one of the most popular CI tools. As with most things, there is a lot more to Jenkins than what we’ll be discussing here.

The goal of Jenkins is to provide an interface for developing build/deployment pipelines, task automation, reporting and more. It has support for most major build tools, including Maven, which we’ll discuss. Jobs can be triggered manually, on a schedule, off version control hooks, by polling version control for updates, off of other builds completing, and more.

How we’ve used it

As we talked about before, we used Chef to develop our Jenkins server. This required that we figure out how Jenkins manages its configuration. Jenkins has a number of XML/JSON/text files that it and its plugins use to persist configuration changes. First I initialized a git repo in my Jenkins home folder and then proceeded to commit changes as I updated the Jenkins configuration. This allowed me to track and add all the configuration updates to my cookbook. This seems like a large hassle due to the sheer number of configuration files, and will most likely eventually need to be changed over to using the scripting console to configure and maintain the server.

By default Jenkins comes with a plugin to perform Maven builds. We took advantage of this since the project is a Maven project. We have two jobs configured in Jenkins.

The first job performs CI for our trunk branch. This job polls for changes in svn. Once a change is picked up, Jenkins checks out the source code and performs our Maven build command to run our full (dev) integration test suite. We either end up with a successful build (all tests passed), an unstable build (all unit tests passed but there were integration test failures), or a failed build (compilation errors or unit test failures).

The second job uses the buildresult-trigger plugin. The goal of this job is to tag and deploy the app to our test servers. This job triggers twice a day if it sees that the first job has a new build in a successful state. If so, this job will handle database migration using Maven/Flyway, handle WebSphere deployment using the WebSphere admin client, and then execute our happy path integration tests to make sure everything deployed correctly. If the first job is not in success status then this job will not run, preventing a build with integration test errors from being deployed.

Pros

  • Mature CI platform
  • Tons of Community support
  • Support for just about anything through plugins
  • Scriptable

Cons

  • Text file configuration hell (most likely solved by using the scripting console)
  • There doesn’t seem to be a ton of info on scripting outside of the javadocs

A look at Bean Validation API scenarios.

Recently I had to start up a new Java project and opted to use the new Java EE Bean Validation, specifically the Hibernate Validator implementation. We’ll also be using it in the context of a Spring 3 application, which gives us some extra support for triggering and handling validations.

First add your Maven dependency; this is the latest version implementing the 1.0 spec.

	<dependency>
		<groupId>org.hibernate</groupId>
		<artifactId>hibernate-validator</artifactId>
		<version>4.3.1.Final</version>
	</dependency>

Next, in our Spring configuration class:

@Configuration
@EnableWebMvc
public class MvcConfig extends WebMvcConfigurerAdapter {
	/**
	 * Registering our validator with spring mvc for our i18n support
	 */
	@Override
	public Validator getValidator() {
		try {
			return validator();
		} catch (final Exception e) {
			throw new BeanInitializationException(
					"exception when registering validator in "
							+ this.getClass().getName(), e);
		}
	}

	/**
	 * We use springs reloadable message resource and set it to refresh every
	 * hour.
	 * 
	 * @return
	 */
	@Bean
	public MessageSource messageSource() {
		final ReloadableResourceBundleMessageSource messageSource = new ReloadableResourceBundleMessageSource();
		messageSource.setBasenames("/WEB-INF/messages/validation",
				"/WEB-INF/messages/text", "/WEB-INF/messages/label",
				"/WEB-INF/messages/popover");
		messageSource.setCacheSeconds(3600);
		return messageSource;
	}

	/**
	 * This is the validator we will provide to spring mvc to handle message
	 * translation for the bean validation api (hibernate-validation)
	 * 
	 * @return
	 * @throws Exception
	 */
	@Bean
	public LocalValidatorFactoryBean validator() throws Exception {
		final LocalValidatorFactoryBean bean = new LocalValidatorFactoryBean();
		bean.setValidationMessageSource(messageSource());
		bean.setProviderClass(HibernateValidator.class);
		return bean;
	}
}

Here we create a factory bean with HibernateValidator as the provider class, and we pass in our Spring messageSource bean to handle error message resolution. Finally, we register the validator with Spring MVC by extending WebMvcConfigurerAdapter and overriding its getValidator method. Now we are ready to start validating our data.

First let’s take a look at our input objects, and then we’ll break down what’s going on.

@GroupSequenceProvider(CapsAccountFormGroupSequenceProvider.class)
public class CapsAccountForm {
	@NotNull(message = "{required}")
	private PaymentType paymentType;

	@Valid
	private BankAccount bankAccount;

	@Valid
	private AlternateContact alternateContact;

	@AssertTrue(message = "{please.agree}")
	@NotNull(message = "{required}")
	private Boolean agreement;
	//other getters and setters
}
public final class BankAccount {
	@BankABANumber
	private String bankAccountNumber;

	@NotBlank(message = "{required}")
	@Size(max = 17, message = "{max.seventeen.digits}")
	@Pattern(regexp = "[0-9]*", message = "{numeric.only}")
	private String checkingAccountNumber;

	@NotBlank(message = "{required}")
	@Length(max = 40, message = "{length.exceeded}")
	private String bankName;

	@Length(max = 28, message = "{length.exceeded}")
	private String city;

	private State state;

	@Pattern(regexp = "[0-9]{5}|[0-9]{9}", message = "{zip.length.exceeded}")
	private String zipCode;

	@Pattern(regexp = "[0-9]*", message = "{numeric.only}")
	@Length(max = 10, message = "{length.exceeded}")
	private String phoneNumber;

	@Pattern(regexp = "[0-9]*", message = "{numeric.only}")
	@Length(max = 10, message = "{length.exceeded}")
	private String faxNumber;

	@Length(max = 40, message = "{length.exceeded}")
	private String bankAddress;

	//getters and setters
}
public final class AlternateContact {
	@Length(max = 100, message = "{length.exceeded}")
	public String name;

	@Length(max = 20, message = "{length.exceeded}")
	@Pattern(regexp = "[0-9]*", message = "{numeric.only}")
	public String phone;
	//getters and setters
}

The above code is used for the following business scenario. First off, we are writing to a legacy table to support automation of a currently multi-step process in a legacy system, so most of our constraints must follow what those tables currently define (that should explain some of the random field length checks). Next, the original form has two different paymentType options which affect whether you need to populate the bankAccount form or not. If the paymentType is D (Debit) then we need the bankAccount form and process its validations; if the paymentType is C (Trust, legacy code) then we don’t need to include the bankAccount form and will not process its validations. Finally, there is the alternateContact form, which is optional, and the agreement field, which is required to be checked/true.

First we have the basic validation annotations provided to us by Bean Validation and Hibernate Validator. The ones provided by the spec are common to all implementations, and Hibernate provides a few extra implementation-specific ones. I’d say these are mostly self-explanatory, but we can take a quick look at a few.

All the annotations take a message parameter; if the message is wrapped in curly braces it will be evaluated through the Spring messageSource, otherwise the text given will be returned as the message.

The most commonly used annotations will be @NotNull, @NotBlank (for Strings), @NotEmpty (for Collections), @Pattern, @Length, and @Size. Each of these annotations takes various parameters to be used during evaluation; for example, the regexp attribute on @Pattern is the regular expression that the string must match to be considered valid.

Next let’s take a look at how our validations are triggered and how the current Spring validation integration works.

The @Valid annotation is used to trigger bean validation. This will generally be applied to Spring controller method parameters. It can also be used to trigger nested validation of objects, like we have in the above code to validate the bankAccount and alternateContact fields. Let’s take a look at the create controller method for our form.

	@RequestMapping(value = "/crids/{crid}/caps", method = RequestMethod.POST)
	public String createCaps(@PathVariable final String crid,
			@Valid @ModelAttribute final CapsAccountForm capsAccountForm,
			final BindingResult errors, final Model model,
			final RedirectAttributes redirectAttributes)

As we said above, user information is pulled from the current user and company info is pulled from the URL (the crid PathVariable above). Here you can see our CapsAccountForm as a parameter, annotated with @Valid. When used in a Spring controller like this, any validation errors that are triggered will automatically be populated in the BindingResult object for us. If you’ve used Spring before, you know that in the past you generally implemented Spring’s Validator interface and populated the BindingResult object on your own.

What if we want to use this same form, but for a restful service instead of a web form? Let’s take a look at our API’s creation endpoint.

	@RequestMapping(value = "/api/crids/{crid}/caps/", consumes = {
			"application/json", "application/xml" }, produces = {
			"application/json", "application/xml" }, method = RequestMethod.POST)
	public @ResponseBody
	CreateCapsAccountOutput createCaps(@PathVariable final String crid,
			@Valid @RequestBody final CapsAccountForm capsAccountForm ) {

As you can see, this method is very similar to the previous one, with the exception that we’ll be accepting JSON/XML as the input and producing the same. Again, the @Valid annotation will trigger any validations on our objects, but this time we have no BindingResult object to be populated, so what happens? Instead, a MethodArgumentNotValidException is thrown, which we can then handle in a Spring ExceptionHandler method.


	@ExceptionHandler(MethodArgumentNotValidException.class)
	@ResponseStatus(value = HttpStatus.BAD_REQUEST)
	public @ResponseBody
	HttpValidationMessage validation(final MethodArgumentNotValidException e,
			final Locale locale) {
		return new HttpValidationMessage(e.getBindingResult(), messageSource,
				locale);
	}

Here we catch any MethodArgumentNotValidException thrown by the given controller (or any controller, if this is a global handler in a @ControllerAdvice class) and we pull out the BindingResult object attached to it to populate a message to be returned with a 400 status code.
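HttpValidationMessage is just a small DTO of our own; it isn’t shown elsewhere, so here’s a sketch of what such a class might look like, resolving each field error through the message source for i18n.

import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

import org.springframework.context.MessageSource;
import org.springframework.validation.BindingResult;
import org.springframework.validation.FieldError;

public class HttpValidationMessage {

	private final Map<String, String> errors = new HashMap<String, String>();

	public HttpValidationMessage(final BindingResult bindingResult,
			final MessageSource messageSource, final Locale locale) {
		for (final FieldError fieldError : bindingResult.getFieldErrors()) {
			// FieldError implements MessageSourceResolvable, so the message
			// source can resolve it against our i18n bundles
			errors.put(fieldError.getField(),
					messageSource.getMessage(fieldError, locale));
		}
	}

	public Map<String, String> getErrors() {
		return errors;
	}
}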

Now that we’ve seen the basics, let’s look at creating a new custom constraint. If you noticed, on the above BankAccount class we are using the annotation @BankABANumber, which is a custom constraint I created. Let’s take a look at the code it triggers.

@Target({ METHOD, FIELD, ANNOTATION_TYPE })
@Retention(RUNTIME)
@ReportAsSingleViolation
@Constraint(validatedBy = BankABANumberValidator.class)
@Documented
@NotBlank
@Size(min = 9, max = 9)
@Pattern(regexp = "[0-9]*")
public @interface BankABANumber {

	String message() default "{caps.account.aba.number.invalid}";

	Class<?>[] groups() default {};

	Class<? extends Payload>[] payload() default {};
}
public class BankABANumberValidator implements
		ConstraintValidator<BankABANumber, String> {

	private ABANumberCheckDigit abaNumberCheckDigit;

	@Override
	public void initialize(final BankABANumber arg0) {
		abaNumberCheckDigit = new ABANumberCheckDigit();
	}

	@Override
	public boolean isValid(final String arg0,
			final ConstraintValidatorContext arg1) {
		return abaNumberCheckDigit.isValid(arg0);
	}

}

First let’s look at the annotation. Here you can see we’ve actually combined a few existing validations and applied the @ReportAsSingleViolation annotation. Instead of showing a different message for each failed validation, this will report only a single message, which in this case is the default we’ve supplied: {caps.account.aba.number.invalid}. So if the field is blank, the wrong size, not a digit, or not a valid routing #, then we’ll see a single error message.

Next, since we aren’t only wrapping existing annotations, let’s look at how we actually determine the routing number is valid. As you can see, on our annotation we supply @Constraint with the BankABANumberValidator class as its validation implementation. This class implements the provided ConstraintValidator interface, which takes generics of the annotation and the field type we are validating. From here you can access any fields set on the annotation and the value of the field we are validating. We are using Apache Commons Validator, which provides us with the algorithm to validate a routing #. I won’t go into the specifics of this validation, but you can easily look it up. So on top of our standard validations, this code also gets executed during validation and simply returns a boolean.

Note: You are able to use Spring’s AspectJ @Configurable injection support to inject Spring beans into this class. This would allow for dynamic checks against a database or webservice.
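A hedged sketch of that idea, assuming AspectJ weaving is enabled and a hypothetical BankLookupService bean:

import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;

@Configurable // aspectj weaving injects dependencies even though bean validation instantiates this class
public class DynamicBankABANumberValidator implements
		ConstraintValidator<BankABANumber, String> {

	@Autowired
	private BankLookupService bankLookupService; // hypothetical spring bean

	@Override
	public void initialize(final BankABANumber constraintAnnotation) {
	}

	@Override
	public boolean isValid(final String value,
			final ConstraintValidatorContext context) {
		// dynamic check against a database/webservice rather than a static algorithm
		return bankLookupService.isKnownRoutingNumber(value);
	}
}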

Finally, let’s take a look at one last scenario. If you remember, earlier I said we only evaluate the bankAccount object if the paymentType is set to Debit. This scenario is slightly more complicated, but definitely useful. Let’s look at the CapsAccountForm class to see exactly what we are doing to achieve this validation setup.

On the CapsAccountForm class we have the annotation @GroupSequenceProvider(CapsAccountFormGroupSequenceProvider.class). Let’s take a look at the code for this provider before we get into explaining what is happening.

public class CapsAccountFormGroupSequenceProvider implements
		DefaultGroupSequenceProvider<CapsAccountForm> {

	@Override
	public List<Class<?>> getValidationGroups(final CapsAccountForm object) {
		final List<Class<?>> groups = Lists.newArrayList();
		groups.add(CapsAccountForm.class);
		if (object.getPaymentType() != null
				&& object.getPaymentType().equals(TrustPaymentMethod.D)) {
			groups.add(BankAccount.class);
		} else {
			// throw away anything entered for the bank account when it
			// isn't a debit payment
			object.setBankAccount(null);
		}
		return groups;
	}
}

First let’s explain what group sequences are. Each validation annotation can take a groups parameter, which can be used to evaluate only specific groups, or groups in a specific order so that if an earlier group fails a validation check then the rest are left unevaluated. If you know beforehand which set of groups you want, you can use the @Validated annotation on your Spring method and supply it a list of classes/interfaces which correspond to the group values.

Now, since our groups are based off the paymentType field, we don’t know our groups up front, so we use the Hibernate-specific @GroupSequenceProvider to determine our groups before validating the class. Let’s walk back through our sequence provider, CapsAccountFormGroupSequenceProvider.

First, you’ll notice we add CapsAccountForm to our groups, since the provider expects the class it’s evaluating to be in its group list. Next we add the BankAccount class to our groups IF our paymentType is D; otherwise we ignore it and throw away any entered data by setting the field to null.

Now we either have a single group (CapsAccountForm) or two groups (CapsAccountForm and BankAccount). Let’s look at the path where we only have the CapsAccountForm group. Since the bankAccount field has @NotNull with the group BankAccount, it will only be evaluated if the paymentType is D. Because we don’t have the BankAccount class in our groups, this check is ignored, and the @Valid annotation is not processed since our bankAccount field should be null when the paymentType is C. If the paymentType is D, then we add the BankAccount group, make sure the bankAccount field is not null, and then proceed to process the remaining fields in the BankAccount object.

Bean Validation is still in its early stages and will continue to add more support, but hopefully seeing these situations and how we adapt bean validation to them helps you in the future when validating more complex scenarios.

From the Browser to the Database – Using multiple frameworks to get the job done

We are going to take a look at the technologies and code used to go from a form in a web browser to saving that data in our database.

The technology stack we are using for this overview includes

  • HTML5
  • Bootstrap 3
  • Sitemesh 2.4
  • Servlet 3
  • Spring 3.2
  • Spring MVC 3.2
  • Spring Data JPA 1
  • Bean Validation 1
  • JPA2/Hibernate4
  • Relational Database/Oracle 11g

We are going to review some code that is used to mock out the Customer Registration API for external postal applications.

First, let’s design our data model and create a script to create some database objects.


DECLARE
	v_sql LONG;
BEGIN

	begin
		v_sql := 'create table dev_users (
		id number primary key,
		username varchar2(255) not null,
		password varchar2(255) not null,
		description varchar2(255) not null,
		enabled integer not null,
		first_name varchar2(255) not null,
		last_name varchar2(255) not null,
		email varchar2(255) not null,
		phone_number varchar2(10) not null,
		phone_number_ext varchar2(4),
		address_line1 varchar2(255) not null,
		address_line2 varchar2(255),
		address_line3 varchar2(255),
		city varchar2(50) not null,
		state varchar2(2) not null,
		postal_code varchar2(5) not null,
		created_time timestamp not null
	)';
	execute immediate v_sql;
	exception when others  then
		IF SQLCODE = -955 THEN
	        NULL; -- suppresses ORA-00955 exception
	      ELSE
	         RAISE;
	      END IF;
	 end;
	 begin
		v_sql := 'create unique index dev_users_username_idx on dev_users(username)';
		execute immediate v_sql;
	exception when others  then
		IF SQLCODE = -955 THEN
	        NULL; -- suppresses ORA-00955 exception
	      ELSE
	         RAISE;
	      END IF;
	 end;
	 begin
		v_sql := 'create sequence dev_users_seq CACHE 50';
		execute immediate v_sql;
	exception when others  then
		IF SQLCODE = -955 THEN
	        NULL; -- suppresses ORA-00955 exception
	      ELSE
	         RAISE;
	      END IF;
	 end;
END;
/

You should notice right away that we are wrapping our create statements in an anonymous Oracle PL/SQL block. This allows us to catch “object already exists” exceptions and makes our script idempotent, which just means it can be rerun over and over without any errors or side effects.

So we create a table that holds some basic user information. It has a single numeric id based off a sequence; using a non-business-related key on our data is a best practice and reduces how many columns we need to join across. We then create a unique index on our natural key, the username, which we will generally use to query a specific user’s data. We also remember to add our not null and size constraints in the table definition.

After you’ve migrated the database with the new table (I’ve been using Flyway for database management), we can move on to creating our JPA entity to load the data. I’ve added some additional comments explaining what some of the annotations do for us. Here we’ll also get our first look at the bean validation constraints.


package com.usps.ach.dev.user;
//imports

/**
 * This is a database representation of the custreg account api.
 *
 * @author stephen.garlick
 * @deprecated for use only in the dev profile
 */
@Entity //marks this class as a jpa entity
@Immutable //marks the entity so it cannot be updated
@Cacheable //enables entity caching against the @Id field
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY, region = "devUser") //tells hibernate to use a read-only cache and the devUser cache region defined in our ehcache.xml file
@NaturalIdCache //sets up a cache using the fields marked with @NaturalId as the key
@Table(name = "dev_users") //overrides the base table name; the configured hibernate naming strategy would use dev_user by default, but we like to pluralize our table names
@SequenceGenerator(sequenceName = "dev_users_seq", name = "dev_users_seq", allocationSize = 1) //sets up id generation based on the sequence we created. make sure allocationSize = 1 unless you plan on making your sequences increment by more than 1 at a time. this also makes sure hibernate doesn't use its HiLo strategy
@Deprecated //we mark this deprecated so that we get a strong warning if someone tries to use this code in a non-dev env.
public final class DevUser {
	@Id
	@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "dev_users_seq") //let hibernate generate our ids
	private Long id;

	@NaturalId
	@NotBlank(message = "{please.enter}") //bean validation constraint that checks if the field is null or empty string.  the message is a reference to a messagesource key used for i18n
	@Length(max = 255, message = "{length.exceeded}") //constraint that checks the fields length
	@Pattern(regexp = "[a-zA-Z0-9]*", message = "{alphanum.only}") //pattern to make sure the field only has alphanumeric characters
	@Column(unique = true)
	private String username;

	@NotBlank(message = "{please.enter}")
	@Length(max = 255, message = "{length.exceeded}")
	private String description;

	@NotBlank(message = "{please.enter}")
	@Length(max = 255, message = "{length.exceeded}")
	private String firstName;

	@NotBlank(message = "{please.enter}")
	@Length(max = 255, message = "{length.exceeded}")
	private String lastName;

	@NotBlank(message = "{please.enter}")
	@Length(max = 255, message = "{length.exceeded}")
	@Email(message = "{email.invalid}")
	private String email;

	@NotNull(message = "{please.enter}")
	@Length(min = 10, max = 10, message = "{please.enter.digit.length}")
	@Pattern(regexp = "[0-9]*", message = "{numeric.only}")
	private String phoneNumber;

	@Length(max = 4, message = "{length.exceeded}")
	@Pattern(regexp = "[0-9]*", message = "{numeric.only}")
	private String phoneNumberExt;

	@NotBlank(message = "{please.enter}")
	@Length(max = 255, message = "{length.exceeded}")
	private String addressLine1;

	@Length(max = 255, message = "{length.exceeded}")
	private String addressLine2;

	@Length(max = 255, message = "{length.exceeded}")
	private String addressLine3;

	@NotBlank(message = "{please.enter}")
	@Length(max = 50, message = "{length.exceeded}")
	@Pattern(regexp = "[a-zA-Z]*", message = "{alpha.only}")
	private String city;

	@NotNull(message = "{please.enter}")
	@Length(min = 2, max = 2, message = "{please.enter.alpha.length}")
	@Pattern(regexp = "[a-zA-Z]*", message = "{alpha.only}")
	private String state; //TODO: pull this out into an ENUM instead of a 2 char string

	@NotNull(message = "{please.enter}")
	@Length(min = 5, max = 5, message = "{please.enter.digit.length}")
	@Pattern(regexp = "[0-9]*", message = "{numeric.only}")
	private String postalCode;

	private Date createdTime;
	private String password;
	private Integer enabled;

	@OneToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL) //we create a mapping to our real user table so we can create a user object when a dev user is created
	@PrimaryKeyJoinColumn(referencedColumnName = "id") //one to one mapping using both tables primary key column (they will have same primary key in both tables)
	private User user;

	@PrePersist //jpa callback to set the createdTime field before insert
	public void createTime() {
		this.createdTime = new Date();
	}

	//make sure to override hashCode and equals against our natural key instead of using the instance-based defaults. also don't use the id field since it may not be set yet
	@Override
	public int hashCode() {
		final int prime = 31;
		int result = 1;
		result = prime * result
				+ ((username == null) ? 0 : username.hashCode());
		return result;
	}

	@Override
	public boolean equals(Object obj) {
		if (this == obj)
			return true;
		if (obj == null)
			return false;
		if (getClass() != obj.getClass())
			return false;
		DevUser other = (DevUser) obj;
		if (username == null) {
			if (other.username != null)
				return false;
		} else if (!username.equals(other.username))
			return false;
		return true;
	}
//getters and setters
}

Hibernate’s ImprovedNamingStrategy allows us to map camel-cased fields to database columns by converting everything to lowercase and inserting underscores before uppercase letters. For example: firstName -> first_name, lastName -> last_name, addressLine1 -> address_line1.
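For reference, here’s a sketch of switching that naming strategy on when building the entity manager factory in Spring; the property key assumes Hibernate 4’s JPA integration and the scanned package is illustrative.

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

@Configuration
public class JpaConfig {

	@Bean
	public LocalContainerEntityManagerFactoryBean entityManagerFactory(
			final DataSource dataSource) {
		final LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
		emf.setDataSource(dataSource);
		emf.setPackagesToScan("com.usps.ach"); // illustrative package
		emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
		// camelCase fields -> snake_case columns (firstName -> first_name)
		emf.getJpaPropertyMap().put("hibernate.ejb.naming_strategy",
				"org.hibernate.cfg.ImprovedNamingStrategy");
		return emf;
	}
}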

Next we use Spring Data to create a dynamic DAO for us, effectively eliminating the need to write our own DAO layer.


package com.usps.ach.dev.user;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

/**
 * Spring data for manipulating our {@link DevUser}
 *
 * @author stephen.garlick
 * @deprecated only for use in the dev profile
 */
@Deprecated
@Repository
public interface DevUserRepository extends JpaRepository<DevUser, Long> {
	public DevUser findByUsername(String username);

}

If you look at JpaRepository you can see that it has a standard set of methods like findAll(), findOne(ID), save(Entity) and other CRUD methods. We can create our own queries by simply adding a new method to our interface, and Spring Data will generate the necessary JPA code for us. Here, since we generally look up users based on their username instead of their id, we add a findByUsername method. You can do this with any field, e.g. findByField1AndField2OrField3. If you have the Spring Data plugin for Eclipse, it will let you know if your methods are valid. There are more advanced techniques you can use here, such as named queries and overriding the base implementation, but for basic CRUD operations this is the simplest form. See the spring-data-jpa documentation if you’d like to know more.
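To show the flavor of the derived query naming, here are a few hypothetical methods Spring Data could generate for us (none of these exist in our real repository):

import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;

public interface DevUserQueryExamples extends JpaRepository<DevUser, Long> {
	// all hypothetical; spring data derives the JPA queries from the names
	List<DevUser> findByCityAndState(String city, String state);

	List<DevUser> findByLastNameOrderByFirstNameAsc(String lastName);

	List<DevUser> findByEmailContaining(String fragment);
}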

Next we’ll take a look at our service layer. The service layer’s job is to handle our transactions, possibly caching, and any extra logic necessary. I’ve excluded the interface and a few methods for brevity.


package com.usps.ach.dev.user;

//imports

/**
 * See {@link DevUserService}
 *
 * @author stephen.garlick
 * @deprecated this class is only for use in the dev profile
 */
@Profile(Profiles.DEV_SERVICES) //make sure this bean is only loaded when spring.profiles.active=DEV_SERVICES
@Service //marks this class for spring to create and inject
@Transactional //tells spring all methods in this class are to be run in transactions
@Deprecated //again deprecate so we can give a strong warning if anyone uses this outside the dev profiles
public final class DevUserServiceImpl implements DevUserService {

	@Autowired //wire in our repo by type
	private DevUserRepository devUserRepository;

	@Override
	public DevUser create(final DevUser devUser) {
		devUser.setPassword(devUser.getUsername());
		devUser.setEnabled(1);
		User user = new User();
		user.setUsername(devUser.getUsername());

		devUser.setUser(user); //since we have cascading set to ALL on our DevUser object, the User object will automatically be persisted on save. if you don't have a cascade of ALL/PERSIST then we'll get an exception here saying User is a transient object.
		return save(devUser);
	}

	private DevUser save(final DevUser devUser) {
		return devUserRepository.save(devUser);
	}

	@Override
	public boolean exists(final DevUser devUser) {
		return null != find(devUser.getUsername()); //returns whether the given entity has already been created
	}

	@Override
	public DevUser find(final String username) {
		return devUserRepository.findByUsername(username); //use our derived query method to look up the DevUser
	}

	@Override
	public Collection<DevUser> findAll() {
		return devUserRepository.findAll(); //delegate straight to our repo
	}
}

Finally, we can look at our Spring MVC controller, which will handle the incoming requests from the servlet container. We’ll want to follow the basic principles of REST in that we properly use the verbs: GET to retrieve, POST to create, PUT to update, and DELETE to destroy. We’ll also use the proper HTTP status codes where applicable (i.e. user not found returns a 404 page).

URLs will be designed as access to our entity resources. Since this is a dev form, we’ll be under the /dev/ namespace. As you’ll see below, we use GET /dev/users to show all users, GET /dev/users/{username} to access a specific user, and GET /dev/users/new for a form where new users are created; a POST /dev/users with the form data will create a new user. We won’t allow editing or deleting, but if we did we would map /dev/users/{username}/edit, which would have a form to do a PUT on /dev/users/{username}, and a DELETE on /dev/users/{username} to destroy the resource.

Also we’ll be using our DevUser entity for form binding and validation in this tier.


package com.usps.ach.dev.user;
//imports

/**
 * This class handles creation of new {@link DevUser} to be used for logging in
 * under our dev profile. it is also intended to mock out the custreg account
 * api in our dev profile
 *
 * @author stephen.garlick
 * @deprecated only for use in the dev profile
 */
@Controller //marks this to be registered by spring
@Profile(Profiles.DEV_SERVICES) //makes it only picked up in the dev profile
@Deprecated //marked deprecated to hide deprecated warnings from our dependencies
public final class DevUserController {

	@Autowired
	private DevUserService devUserService;

	@Autowired
	private DevCridService devCridService;

	@RequestMapping(value = "dev/users/new", method = RequestMethod.GET)
	public String getNew(@ModelAttribute final DevUser devUser,
			final Model model) {
		return "dev/user/new";  //return the view for this new form
	}

	@RequestMapping(value = "dev/users", method = RequestMethod.GET)
	public String getIndex(final Model model) {
		setup(model);
		return "dev/user/index"; //return the index page for our users
	}

	@RequestMapping(value = "dev/users/{username}", method = RequestMethod.GET)
	public String viewUser(@PathVariable final String username,
			@ModelAttribute DevCrid devCrid, final Model model) {
		final DevUser devUser = devUserService.find(username);
		if (null == devUser)
			throw new NotFoundException(); //if null is returned then the user doesn't exist and we should return a 404
		final Collection<DevCrid> devCrids = devCridService.findAll();
		model.addAttribute("devUser", devUser);
		model.addAttribute("devCrids", devCrids);
		return "dev/user/view";
	}

	private void setup(Model model) {
		model.addAttribute("users", devUserService.findAll());
	}

	/**
		Here we create a new user. The user data is passed in via the devUser parameter, and the @Valid annotation triggers all the bean validation on our entity object. If errors are found then Spring will automatically populate them in the BindingResult errors parameter for us.
	**/
	@RequestMapping(value = "dev/users", method = RequestMethod.POST)
	public String create(@Valid @ModelAttribute final DevUser devUser,
			final BindingResult errors, final Model model,
			final RedirectAttributes redirectAttributes) {
		final boolean userExists = devUserService.exists(devUser); //manual validation to make sure the user does not already exist. this is not part of the bean validation spec yet
		if (userExists)
			errors.rejectValue("username", "",
					"A user already exists with this username.");
		if (errors.hasErrors()) {
			return "dev/user/new"; //return the new form and display any errors
		}
		//otherwise create the user and redirect to its newly created view page. generally you want to redirect after a non-GET operation.
		DevUser newUser = devUserService.create(devUser);
		redirectAttributes.addAttribute("username", newUser.getUsername());
		return "redirect:/dev/users/{username}";
	}
}

You may have noticed that all the classes we’ve covered are in the same package. This benefits us by keeping all classes related to a piece of functionality logically organized, making it easier to find classes that relate to each other. This package-by-feature naming scheme is preferable to using packages to group components by type (i.e. xx.xx.xx.controller, xx.xx.xx.model, etc).

Finally, we can take a look at the views returned from our Spring controller, which allow the user to actually enter the information in the browser and submit it to our server.

users index /dev/users

<%@include file="/WEB-INF/jsp/shared/common-taglibs.jsp"%>
<!DOCTYPE html>
<html>
<head>
<title>Development Users</title>
</head>
<body id="users">
<div>
		<a href="<c:url value="/dev/users/new"/>"
			class="btn btn-primary btn-sm">Create New User</a></div>
<div>
<table class="table">
<thead>
<tr>
<td>Username</td>
<td>Description</td>
<td>Login</td>
</tr>
</thead>
<tbody>
				<c:forEach items="${users}" var="devUser">
<tr>
<td><a class="btn btn-primary btn-sm"
							href="<c:url value="/dev/users/${devUser.username}"/>">${devUser.username}</a></td>
<td>${devUser.description}</td>
<td><form action="login/process" method="POST"><input name="j_username" value="${devUser.username}" type="hidden"/><input name="j_password" value="${devUser.username}" type="hidden"/><button class="btn btn-primary btn-sm">Login</button></form></td>
</tr>
</c:forEach></tbody>
</table>
</div>
</body>
</html>

users new /dev/users/new

<%@include file="/WEB-INF/jsp/shared/common-taglibs.jsp"%>

<!DOCTYPE html>
<html>
<head>
<title>Development Create New User</title>
</head>
<body id="users">
<h3>Create a New User</h3>
<h5>The password will be the same as the username</h5>
<form:form commandName="devUser" method="POST" servletRelativeAction="/dev/users/"
			cssClass="form-horizontal">
			<t:bootstrap-text-input path="username" maxLength="255" label="Username" required="true"/>
			<t:bootstrap-text-input path="description" maxLength="255" label="Description" required="true"/>
			<t:bootstrap-text-input path="firstName" maxLength="255" label="First Name" required="true"/>
			<t:bootstrap-text-input path="lastName" maxLength="255" label="Last Name" required="true"/>
			<t:bootstrap-text-input path="email" maxLength="255" label="Email Address" required="true"/>
			<t:bootstrap-text-input path="phoneNumber" maxLength="10" label="Phone Number" required="true"/>
			<t:bootstrap-text-input path="phoneNumberExt" maxLength="4" label="Phone Number Extension"/>
			<t:bootstrap-text-input path="addressLine1" maxLength="255" label="Address Line 1" required="true"/>
			<t:bootstrap-text-input path="addressLine2" maxLength="255" label="Address Line 2"/>
			<t:bootstrap-text-input path="addressLine3" maxLength="255" label="Address Line 3"/>
			<t:bootstrap-text-input path="city" maxLength="50" label="City" required="true"/>
			<t:bootstrap-text-input path="state" maxLength="2" label="State" required="true"/>
			<t:bootstrap-text-input path="postalCode" maxLength="5" label="Postal Code" required="true"/>
<div class="col-lg-8">
				<form:button class="btn btn-primary btn-sm pull-right">Create</form:button></div>
</form:form>
</body>
</html>

user view /dev/users/{username}


<%@include file="/WEB-INF/jsp/shared/common-taglibs.jsp"%>
<!DOCTYPE html>
<html>
<head>
<title>Development User View</title>
</head>
<body id="users">
<div class="form-horizontal">
<h3>Development User</h3>
<t:bootstrap-text-input disabled="true" path="devUser.username"
			maxLength="255" label="Username" required="true" />
		<t:bootstrap-text-input disabled="true" path="devUser.description"
			maxLength="255" label="Description" required="true" />
		<t:bootstrap-text-input disabled="true" path="devUser.firstName"
			maxLength="255" label="First Name" required="true" />
		<t:bootstrap-text-input disabled="true" path="devUser.lastName"
			maxLength="255" label="Last Name" required="true" />
		<t:bootstrap-text-input disabled="true" path="devUser.email"
			maxLength="255" label="Email Address" required="true" />
		<t:bootstrap-text-input disabled="true" path="devUser.phoneNumber"
			maxLength="10" label="Phone Number" required="true" />
		<t:bootstrap-text-input disabled="true" path="devUser.phoneNumberExt"
			maxLength="4" label="Phone Number Extension" /></div>
<div>
<table class="table" id="user-crids">
<thead>
<tr>
<td>CRID</td>
<td>Company Name</td>
<td>Link Toggle</td>
</tr>
</thead>
<tbody>
				<c:forEach items="${devCrids}" var="crid">
<tr>
<td><a class="btn btn-primary btn-sm"
							href="<c:url value="/dev/crids/${crid.crid}"/>">${crid.crid}</a></td>
<td>${crid.companyName}</td>
<td><c:choose>
								<c:when test="${crid.achCrid.users.contains(devUser.user)}">
									<form:form servletRelativeAction="/dev/users/${devUser.username}/crids/${crid.crid}" method="DELETE"
										commandName="devCrid">
										<form:button class="btn btn-primary btn-sm">Unlink</form:button>
									</form:form>
								</c:when>
								<c:otherwise>
									<form:form servletRelativeAction="/dev/users/${devUser.username}/crids" method="POST"
										commandName="devCrid">
										<form:button class="btn btn-primary btn-sm">Link</form:button>
										<form:hidden path="crid" value="${crid.crid}" />
									</form:form>
								</c:otherwise>
							</c:choose></td>
</tr>
</c:forEach></tbody>
</table>
</div>
</body>
</html>

and all these views are wrapped in a sitemesh template


<%@include file="/WEB-INF/jsp/shared/common-taglibs.jsp"%>

<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<!-- Bootstrap -->
	<link href="<c:url value="/resources/css/bootstrap.min.css"/>"
	rel="stylesheet" media="screen">

<!-- HTML5 shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
      <script src="../../assets/js/html5shiv.js"></script>
      <script src="../../assets/js/respond.min.js"></script>
    <![endif]-->

<decorator:head />
<title><decorator:title /></title>
</head>
<body>
<div class="container">
<div class="container">
<div id="header" class="col-lg-12">
				<page:applyDecorator name="dev-header">
					<page:param name="section">
						<decorator:getProperty property="body.id" />
					</page:param>
				</page:applyDecorator></div>
<div class="clearfix"></div>
<div class="col-lg-12">
				<decorator:body /></div>
</div>
</div>
<!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
	<script src="<c:url value="/resources/js/jquery-1.10.2.min.js"/>"></script>
	<!-- Include all compiled plugins (below), or include individual files as needed -->
	<script src="<c:url value="/resources/js/bootstrap.min.js"/>"></script>
	<script src="<c:url value="/resources/js/bootstrap-triggers.js"/>"></script>
</body>
</html>

and our sitemesh header for dev panel


<%@include file="/WEB-INF/jsp/shared/common-taglibs.jsp"%>
<c:set var="activeTab"><decorator:getProperty property="section"/></c:set>
<nav class="navbar navbar-default">
<div class="navbar-header">
		<button type="button" class="navbar-toggle" data-toggle="collapse"
			data-target=".navbar-ex1-collapse">
			<span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span>
			<span class="icon-bar"></span> <span class="icon-bar"></span>
		</button>
		<a href="<c:url value="/dev/login"/>" class="navbar-brand">Developer Panel</a></div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse navbar-ex1-collapse">
<ul class="nav navbar-nav">
	<li ${activeTab eq 'login' ? 'class="active"' : '' }><a href="<c:url value="/dev/login"/>">Login</a></li>
	<li ${activeTab eq 'users' ? 'class="active"' : '' }><a href="<c:url value="/dev/users"/>">Users</a></li>
	<li ${activeTab eq 'crids' ? 'class="active"' : '' }><a href="<c:url value="/dev/crids"/>">CRIDs</a></li>
	<li ${activeTab eq 'permits' ? 'class="active"' : '' }><a href="<c:url value="/dev/permits"/>">Permits</a></li>
</ul>
</div>
<!-- /.navbar-collapse -->
</nav>

our common-taglibs file

<%-- common taglibs to include for jsps --%>
<%@ page language="java" contentType="text/html; charset=UTF-8" 	pageEncoding="UTF-8" isErrorPage="false"%>
<%@ taglib uri="http://www.springframework.org/security/tags" 	prefix="sec"%>
<%@ taglib uri="http://www.springframework.org/tags" prefix="spring"%>
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/fmt" prefix="fmt"%>
<%@ taglib uri="http://www.opensymphony.com/sitemesh/decorator" 	prefix="decorator"%>
<%@ taglib uri="http://www.opensymphony.com/sitemesh/page" prefix="page"%>
<%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions"%>
<%@taglib prefix="t" tagdir="/WEB-INF/tags" %>

and our custom bootstrap tags for text input

<%@tag
	description="Extended input tag to allow for sophisticated errors"
	pageEncoding="UTF-8"%>
<%@taglib prefix="spring" uri="http://www.springframework.org/tags"%>
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>
<%@taglib prefix="form" uri="http://www.springframework.org/tags/form"%>
<%@taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions"%>
<%@attribute name="path" required="true" type="java.lang.String"%>
<%@attribute name="labelCssClass" required="false"
	type="java.lang.String"%>
<%@attribute name="formCssClass" required="false"
	type="java.lang.String"%>
<%@attribute name="errorCssClass" required="false"
	type="java.lang.String"%>
<%@attribute name="label" required="true" type="java.lang.String"%>
<%@attribute name="maxLength" required="true" type="java.lang.Long"%>
<%@attribute name="placeholder" required="false" type="java.lang.String"%>
<%@attribute name="required" required="false" type="java.lang.Boolean"%>
<%@attribute name="disabled" required="false" type="java.lang.Boolean"%>
<%@attribute name="readonly" required="false" type="java.lang.Boolean"%>
<c:set var="errors"><form:errors path="${path}"/></c:set>
<div class="form-group ${not empty errors ? 'has-error' : '' }">
	<label
		class="control-label ${empty labelCssClass ? 'col-lg-2' : labelCssClass}"
		for="${path}">${label}<c:if test="${required}">
			<span class="text-danger">*</span>
		</c:if></label>
<div class="${empty formCssClass ? 'col-lg-6' : formCssClass }">
		<form:input path="${path}" disabled="${disabled}" cssClass="form-control" maxlength="${maxLength}" readonly="${readonly}" placeholder="${empty placeholder ? label : placeholder}" /></div>
<c:if test="${not empty errors}">
		<span class="text-danger ${empty errorCssClass ? 'col-lg-4' : errorCssClass }">${errors}</span>
	</c:if></div>

Bootstrap is Twitter’s front-end framework that gives us a grid for easily making layouts, lots of base CSS classes for styling, and some dynamic JavaScript with plugin support as well. I’ve created these custom tags to allow for simple, reusable, and consistent forms on our pages. I won’t go too much into Bootstrap here, but all the classes you see in the given views and tags are provided by Bootstrap. I’ve written zero custom CSS and JavaScript so far.

Of course there are some obvious things missing that we will cover in the future, such as testing each layer with unit and integration tests and all the configuration necessary to actually get the frameworks in place, but the main goal is getting used to developing end-to-end functionality using multiple frameworks to handle all the non-business stuff for us.

Spring Java Configuration in Servlet 3.x containers

As stated on the SpringSource site:

Spring is the most popular application development framework for enterprise Java™. Millions of developers use Spring to create high performing, easily testable, reusable code without any lock-in.

We are going to take a look at getting Spring + Spring MVC set up in a Servlet 3.x container such as Tomcat, WebSphere, JBoss, or any of the many other Java EE/Servlet servers.

Maven Dependencies

First thing we need to do is add Maven dependencies to our pom files. We'll be using a multi-module project as described in an earlier post. We'll have three pom files: one parent, one core, and one webapp. In our Parent POM add the following artifacts to the dependencyManagement section


<dependency>
	<groupId>javax.servlet</groupId>
	<artifactId>javax.servlet-api</artifactId>
	<version>${servlet.version}</version>
	<scope>provided</scope>
</dependency>
<dependency>
	<groupId>javax.servlet</groupId>
	<artifactId>jstl</artifactId>
	<version>${jstl.version}</version>
	<scope>provided</scope>
</dependency>
<dependency>
	<groupId>javax.servlet.jsp</groupId>
	<artifactId>javax.servlet.jsp-api</artifactId>
	<version>${jsp.version}</version>
	<scope>provided</scope>
</dependency>
<dependency>
	<groupId>javax.el</groupId>
	<artifactId>javax.el-api</artifactId>
	<version>${el.version}</version>
	<scope>provided</scope>
</dependency>
<!-- Web application development utilities applicable to both Servlet
	and Portlet Environments (depends on spring-core, spring-beans, spring-context)
	Define this if you use Spring MVC, or wish to use Struts, JSF, or another
	web framework with Spring (org.springframework.web.*) -->
<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-web</artifactId>
	<version>${org.springframework.version}</version>
</dependency>

<!-- Spring MVC for Servlet Environments (depends on spring-core, spring-beans,
	spring-context, spring-web) Define this if you use Spring MVC with a Servlet
	Container such as Apache Tomcat (org.springframework.web.servlet.*) -->
<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-webmvc</artifactId>
	<version>${org.springframework.version}</version>
</dependency>

<!-- Application Context (depends on spring-core, spring-expression, spring-aop,
	spring-beans) This is the central artifact for Spring's Dependency Injection
	Container and is generally always defined -->
<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-context</artifactId>
	<version>${org.springframework.version}</version>
	<exclusions>
		<exclusion>
			<groupId>commons-logging</groupId>
			<artifactId>commons-logging</artifactId>
		</exclusion>
	</exclusions>
</dependency>

<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-core</artifactId>
	<version>${org.springframework.version}</version>
	<exclusions>
		<exclusion>
			<groupId>commons-logging</groupId>
			<artifactId>commons-logging</artifactId>
		</exclusion>
	</exclusions>
</dependency>

<!-- Expression Language (depends on spring-core) Define this if you use
	Spring Expression APIs (org.springframework.expression.*) -->
<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-expression</artifactId>
	<version>${org.springframework.version}</version>
</dependency>

<!-- Bean Factory and JavaBeans utilities (depends on spring-core) Define
	this if you use Spring Bean APIs (org.springframework.beans.*) -->
<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-beans</artifactId>
	<version>${org.springframework.version}</version>
</dependency>

Note the provided scope on the servlet dependencies. This means we expect our servlet container to already have these dependencies available to our application, so we don't need to package them, but we still need them for compiling. The default scope for our other artifacts is compile, which means they'll be compiled against and packaged up with our project.

Also, we are excluding commons-logging from spring-core in favor of an slf4j implementation.

Next in the Parent POM add our versions under the properties section.  The servlet version should correspond to what your container provides and the spring dependencies should be the newest stable version that works with your servlet version.  These should work for 3.0.x containers.


<el.version>2.2.4</el.version>
<jsp.version>2.2.1</jsp.version>
<jstl.version>1.2</jstl.version>
<org.springframework.version>3.2.4.RELEASE</org.springframework.version>
<servlet.version>3.0.1</servlet.version>

Now we need to add the actual dependencies to our project modules for use.

<dependencies>
	<dependency>
		<groupId>javax.servlet</groupId>
		<artifactId>javax.servlet-api</artifactId>
	</dependency>
	<dependency>
		<groupId>javax.servlet</groupId>
		<artifactId>jstl</artifactId>
	</dependency>
	<dependency>
		<groupId>javax.servlet.jsp</groupId>
		<artifactId>javax.servlet.jsp-api</artifactId>
	</dependency>
	<dependency>
		<groupId>javax.el</groupId>
		<artifactId>javax.el-api</artifactId>
	</dependency>

	<!-- Web application development utilities applicable to both Servlet and
		Portlet Environments (depends on spring-core, spring-beans, spring-context)
		Define this if you use Spring MVC, or wish to use Struts, JSF, or another
		web framework with Spring (org.springframework.web.*) -->
	<dependency>
		<groupId>org.springframework</groupId>
		<artifactId>spring-web</artifactId>
	</dependency>

	<!-- Spring MVC for Servlet Environments (depends on spring-core, spring-beans,
		spring-context, spring-web) Define this if you use Spring MVC with a Servlet
		Container such as Apache Tomcat (org.springframework.web.servlet.*) -->
	<dependency>
		<groupId>org.springframework</groupId>
		<artifactId>spring-webmvc</artifactId>
	</dependency>
	<dependency>
		<groupId>${project.groupId}</groupId>
		<artifactId>core</artifactId>
	</dependency>
</dependencies>

Adding this to the webapp pom gives us our servlet and spring web/mvc dependencies, as well as our core project module if you didn't already add it.

<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-context</artifactId>
</dependency>

<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-core</artifactId>
</dependency>

<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-expression</artifactId>
</dependency>

<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-beans</artifactId>
</dependency>

These pull the rest of the Spring dependencies into our core project, which then get pulled transitively into our webapp project.

Configuration

We are going to save why we use Spring, and other IoC containers for Java applications, for a later post and jump right into some configuration code.  Containers that implement the Java EE Servlet Spec 3.x let us skip the standard /WEB-INF/web.xml file in favor of a simpler java class where we can programmatically configure servlets.  So let's delete the web.xml file generated by our maven webapp archetype (keep the /WEB-INF/ folder) and look at how we use Java to register our servlets.

First, just to say it, all the Spring documentation is very well written and complete. Looking at http://static.springsource.org/spring/docs/3.2.x/spring-framework-reference/html/new-in-3.1.html#new-in-3.1-servlet-3-code-config we see a reference to WebApplicationInitializer, where I've slightly modified the code under the "A 100% code-based approach to configuration" section


public class ServletConfig implements WebApplicationInitializer {

	@Override
	public void onStartup(ServletContext servletContext)
			throws ServletException {

		// Create the 'root' Spring application context and register our
		// MvcConfig spring class
		AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext();
		rootContext.register(MvcConfig.class);

		// Manage the lifecycle of the root application context
		servletContext.addListener(new ContextLoaderListener(rootContext));

		// Create the dispatcher servlet's Spring application context
		AnnotationConfigWebApplicationContext dispatcherContext = new AnnotationConfigWebApplicationContext();

		// Register and map the dispatcher servlet
		ServletRegistration.Dynamic dispatcher = servletContext.addServlet(
				"dispatcher", new DispatcherServlet(dispatcherContext));
		dispatcher.setLoadOnStartup(1);
		dispatcher.addMapping("/");
	}

}

You can create this class anywhere in your project and your servlet 3 container will pick it up and execute the code when you start up your application.  I’ve gone with root.package.path.config for all my servlet and spring configuration classes that aren’t specific to application business functionality.
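
As a side note, Spring 3.2 also ships a convenience base class, AbstractAnnotationConfigDispatcherServletInitializer, that wraps this same boilerplate. A minimal sketch of the equivalent setup (reusing the MvcConfig and mapping from above) might look like this:

public class ServletConfig extends AbstractAnnotationConfigDispatcherServletInitializer {

	// the root context gets our MvcConfig, same as the manual version above
	@Override
	protected Class<?>[] getRootConfigClasses() {
		return new Class<?>[] { MvcConfig.class };
	}

	// no servlet-specific configuration classes in our simple setup
	@Override
	protected Class<?>[] getServletConfigClasses() {
		return null;
	}

	// map the dispatcher servlet to '/'
	@Override
	protected String[] getServletMappings() {
		return new String[] { "/" };
	}
}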

How is Spring doing this with Servlet 3?

This new configuration method is built on top of the Servlet 3 dependency, so let's take a quick look at how Spring achieves this auto detection and configuration.  First let's take a look at the SpringServletContainerInitializer class.  Here we see the class definition as


@HandlesTypes(WebApplicationInitializer.class)
public class SpringServletContainerInitializer implements ServletContainerInitializer

If you look at the Javadoc for ServletContainerInitializer you can see it says that there must be a file under META-INF/services which declares all implementations of the interface.  If we look at the spring-web jar's META-INF/services/javax.servlet.ServletContainerInitializer file we see one entry: org.springframework.web.SpringServletContainerInitializer

Now that we know how Spring registers its SpringServletContainerInitializer with the container, let's take a look at how it picks up our WebApplicationInitializer implementations.  You'll notice the @HandlesTypes(WebApplicationInitializer.class) annotation on SpringServletContainerInitializer.  This annotation tells the container to scan for all classes implementing that interface and pass them in as the first parameter of the method


public interface ServletContainerInitializer {

	public void onStartup(Set<Class<?>> c, ServletContext ctx)
			throws ServletException;
}

The container will then pass in all your WebApplicationInitializer implementations, and Spring will invoke each of their onStartup methods.
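
Conceptually, and leaving out the ordering and error handling the real class does, Spring's side of the handoff boils down to something like this simplified sketch:

@HandlesTypes(WebApplicationInitializer.class)
public class SpringServletContainerInitializer implements ServletContainerInitializer {

	@Override
	public void onStartup(Set<Class<?>> webAppInitializerClasses, ServletContext servletContext)
			throws ServletException {
		List<WebApplicationInitializer> initializers = new LinkedList<WebApplicationInitializer>();
		for (Class<?> clazz : webAppInitializerClasses) {
			// the container's scan can also report interfaces and abstract
			// classes, so only concrete implementations are instantiated
			if (!clazz.isInterface() && !Modifier.isAbstract(clazz.getModifiers())) {
				try {
					initializers.add((WebApplicationInitializer) clazz.newInstance());
				} catch (Exception e) {
					throw new ServletException("Failed to instantiate " + clazz.getName(), e);
				}
			}
		}
		// invoke each initializer, e.g. our ServletConfig from earlier
		for (WebApplicationInitializer initializer : initializers) {
			initializer.onStartup(servletContext);
		}
	}
}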

Finally Our Servlet Configuration

Now that we know how our container is loading our servlet information, let's go back to our original configuration and take a look at the Spring classes we are registering.

We need to configure two things. The first is Spring's implementation of ServletContextListener, which allows Spring to start up and shut down with our application.  The second is Spring's DispatcherServlet, which will let us use Spring MVC, which we'll discuss more later.

Listener


// Create the 'root' Spring application context and register our
// MvcConfig spring class
AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext();
rootContext.register(MvcConfig.class);

// Manage the lifecycle of the root application context
servletContext.addListener(new ContextLoaderListener(rootContext));

Servlet


// Create the dispatcher servlet's Spring application context
AnnotationConfigWebApplicationContext dispatcherContext = new AnnotationConfigWebApplicationContext();

// Register and map the dispatcher servlet
ServletRegistration.Dynamic dispatcher = servletContext.addServlet(
		"dispatcher", new DispatcherServlet(dispatcherContext));
dispatcher.setLoadOnStartup(1);
dispatcher.addMapping("/");

As you may have noticed, we configure multiple AnnotationConfigWebApplicationContexts, one for the listener and another for the servlet.  Spring's AnnotationConfigWebApplicationContext is hierarchical, so the root context belongs to the listener, and the context for the servlet can override beans from the root context for that servlet. Since we are keeping our application very simple and have no need for this functionality, we only register our Spring @Configuration class MvcConfig.class in the root context.
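
If the dispatcher ever did need its own beans, we could register a second configuration class on the child context before creating the servlet. A one-line sketch, where DispatcherConfig is a hypothetical servlet-specific @Configuration class:

// hypothetical: beans in DispatcherConfig would override root context beans
dispatcherContext.register(DispatcherConfig.class);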

In our DispatcherServlet we override the default servlet '/' mapping so that Spring MVC can handle all our incoming requests. Note that this is different than mapping '/*', which will cause problems later when using Spring MVC.

Let's take a quick peek at our MvcConfig.class just to see how Spring is taking over at this point.


@Configuration
@EnableWebMvc
@ComponentScan(basePackages = "com.company.artifact")
// do a full scan of the root package to pick up annotated classes
public class MvcConfig extends WebMvcConfigurerAdapter

The @Configuration annotation tells Spring that this is a configuration class (like the old .xml application context files).  For the @ComponentScan annotation we pass in our root package name so that Spring will automatically scan for and pick up any other annotated classes, including other @Configuration classes and beans.
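
As a concrete example of what could live in this class, here's a hedged sketch of a JSP view resolver bean declared inside MvcConfig. The /WEB-INF/views/ location is my assumption, not something from the project:

// a minimal sketch: declare this inside MvcConfig's body
@Bean
public InternalResourceViewResolver viewResolver() {
	InternalResourceViewResolver resolver = new InternalResourceViewResolver();
	// e.g. a controller returning "home" would resolve to /WEB-INF/views/home.jsp
	resolver.setPrefix("/WEB-INF/views/");
	resolver.setSuffix(".jsp");
	return resolver;
}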

That’s it

Now with very little code we have Spring hooked into our web application.

Creating an Empty Multi-Module Project with Maven and m2eclipse

Why Maven?

Maven is a popular, mature, open source build and project management tool that we'll be using to manage our Java project.  It provides a lot of different functionality which can be expanded even further through plugins.  Out of the box Maven gives us a few very important things.  First, it handles dependency management for our application, allowing us to control dependency versions while Maven pulls and packages dependencies for us.  Second, it enforces a standard project folder layout that it copies from when building our artifacts.  M2eclipse gives us the ability to create and manage Maven projects through our Eclipse IDE.

A typical Maven project folder layout looks like this

/src/main/java/each/package/part
/src/main/resources
/src/test/java/each/package/part
/src/test/resources

When we tell Maven to build our project it will compile (if necessary) and copy all the classes and resources under the /src/main folders.  The java folder is intended to hold our java source files.  It follows the convention of having a folder for each part of a package's path, so com.mycompany.artifact.MyJavaClass will be stored at /src/main/java/com/mycompany/artifact/MyJavaClass.java on the file system.  At build time Maven will compile and copy java sources along with their folders into our artifact's /classes/ folder.  Maven will also directly copy anything under /src/main/resources into the /classes/ folder as well.

Test folders follow the same conventions as the main code, except that these classes will NOT be packaged in our artifact; instead they are compiled and run during the build to verify all the code/test cases still pass.  This is an important part of our build process because it allows us to automatically compile and run our tests, provide reporting, and fail the build if tests fail.

One thing to understand about Maven is that it has an opinionated view on best practices for building and managing projects, which a lot of people cite when comparing it with build tools that allow more customization of the build process.  While some see this as a negative, in reality it enforces a standard layout and lets people easily understand the build process of any project, since Maven itself enforces its own build conventions.  Using older tools like Ant you can customize every stage of the build to your liking, but this leads to inexperienced developers creating large, unmanageable build processes, and it requires any new team member to sit down and figure out how the build is configured versus just understanding how Maven configures its builds.

Finally, there will be a POM file in the root of every Maven project, including the parent and each module.  This is Maven's main configuration file.

Why a Multi-Module Project instead of a standalone?

A single standalone project will contain all our business logic, web tier code, and all resources.  To make management of this code easier, Maven allows us to create a hierarchical project tree where we can define standard project settings in our parent project and break our business code out into a separate project from our web tier code.

Let's look at an example to understand why multi-module projects are important.  You have a basic client-facing web application that connects to a database and allows the user to update some records from their browser.  Now this project is complete and you decide you want to write another web application for internal helpdesk users that shares a lot of the same business code.  In a single module project you will now have to pull in not only the business logic code that you want, but also all the web tier code, which may be completely useless to your new web app; or you might just end up copying all your business code into the new project and now need to maintain both sets.  You also have the issue where you need to copy, paste, and maintain Maven dependencies and their versions in two projects' POMs now instead of just one.

With a multi-module project we can solve the second issue by maintaining all our dependencies and versions in a parent project pom which will be inherited by our child projects.  That way we can continue to add as many child projects as we want and only need to update and maintain dependencies in one place.  We can address the first issue by breaking all our business logic out into its own project under our parent.  With our core business logic broken out we can easily declare this project as a dependency in each of the web applications we create.  Now both our internal and external projects will just declare the core business logic project as a dependency and we no longer need to worry about synchronizing changes in business logic across them.  All of this would also apply to having a project that needs a webapp and a batch application, and many other cases.

One thing to understand about the Parent project is that it contains no code and does not produce an application artifact.  It only provides common configuration across all the child projects and lets Maven know which projects are grouped and built together.  In our example case we would end up with two deployable artifacts, one for the internal and one for the external application, both packaging the core project artifact inside them.
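
For reference, the pieces of the parent POM that express this grouping are the pom packaging and the module list; a minimal sketch assuming the core/webapp layout described above:

<packaging>pom</packaging>
<modules>
	<module>core</module>
	<module>webapp</module>
</modules>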

Creating the Project

First thing we need to do is add the Maven Central archetype catalog to our m2eclipse installation so that we can find the pom-root archetype.  Maven archetypes are just project skeletons we can use to quickly start a Maven project instead of creating all the folders ourselves.  In Eclipse, open the Window -> Preferences menu and select the Maven -> Archetypes settings.  Click Add Remote Catalog, enter http://repo1.maven.org/maven2/archetype-catalog.xml for the Catalog File, and name it Maven Central.

Next, right click in the Project Explorer, select New -> Other, and choose Maven -> Maven Project.  Hit next, and when you reach the Select an Archetype screen, select Maven Central from the drop down, search for pom-root, select it, and hit next.  Finally, populate the groupId and artifact name for your maven project and click finish; Maven will generate the Maven folders and a basic parent POM file.

Now that we have a parent project we can start adding modules.  Again right click in the Project Explorer and select New -> Other, but this time select Maven -> Maven Module.  Make sure the parent project you created is listed as the parent and give your new module a name.  Typically I name the shared code/business logic module core and the webapp webapp, possibly with a prefix if there's more than one webapp in the project. Click next and you can select the archetype for your new module; I generally just choose maven-archetype-quickstart, which gives you a very basic maven module.  For web projects you can also use maven-archetype-webapp, which gives you the same project layout as the quickstart but also adds the /webapp/WEB-INF/web.xml folder/file used for Java Servlet apps and is configured to produce a war artifact.

Once you've done the above you should be able to add and launch the application from whichever server you have configured inside eclipse using the WTP, m2e, and m2e-wtp plugins.  If you are missing any of the menus above or are unable to deploy your project, then you might be missing those plugins.  Previously we set up and configured Eclipse to use WebSphere for our dev environment, but the process should be very similar for setting up something like Tomcat or JBoss for those who aren't required to deploy to the IBM server.

Dependency and Plugin Management

As we said, our parent pom holds the configuration for all the plugins and dependencies to be pulled into our modules.  Let's take a look at a few that we'll need for most applications.


<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>${project.groupId}</groupId>
			<artifactId>core</artifactId>
			<version>${project.version}</version>
		</dependency>
		<dependency>
			<groupId>${project.groupId}</groupId>
			<artifactId>webapp</artifactId>
			<version>${project.version}</version>
		</dependency>
	</dependencies>
</dependencyManagement>

These are the child core and webapp modules we created.  ${project.groupId} is a Maven variable for the project's groupId that we specified, and ${project.version} is the project's current version.


<dependencies>
	<dependency>
		<groupId>${project.groupId}</groupId>
		<artifactId>core</artifactId>
	</dependency>
</dependencies>

Here we add the project's core module to the webapp pom so that we can access all the shared business code.


<build>
	<pluginManagement>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-compiler-plugin</artifactId>
				<version>${plugin-version.compiler}</version>
				<configuration>
					<source>${build.java.version}</source>
					<target>${build.java.version}</target>
				</configuration>
			</plugin>
			<plugin>
				<artifactId>maven-war-plugin</artifactId>
				<version>${plugin-version.war}</version>
				<configuration>
					<failOnMissingWebXml>false</failOnMissingWebXml>
				</configuration>
			</plugin>
			<plugin>
				<artifactId>maven-jar-plugin</artifactId>
				<version>${plugin-version.jar}</version>
			</plugin>
			<plugin>
				<artifactId>maven-ear-plugin</artifactId>
				<version>${plugin-version.ear}</version>
			</plugin>
		</plugins>
	</pluginManagement>
</build>

This configures which plugin versions we'll use for our projects.  We are locking down our war/jar/ear and compiler plugin versions here.


<properties>
	<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>

	<build.java.version>1.7</build.java.version>

	<plugin-version.compiler>3.1</plugin-version.compiler>

	<plugin-version.ear>2.8</plugin-version.ear>
	<plugin-version.jar>2.4</plugin-version.jar>
	<plugin-version.war>2.4</plugin-version.war>
</properties>

Here we define replacement properties for use in our pom files via the ${} syntax: our plugin versions, our java build version of 1.7, and the source text encoding of UTF-8.

Additional Info

We've shown how to create a very basic Maven project inside of eclipse, but you can achieve the same results using the Maven command line as well. See http://maven.apache.org/guides/getting-started/ for more details.

Also, we haven't gone into the POM, Maven's configuration file, in detail; we just wanted to go over the very basics of what Maven provides us and how/why we'd want multi-module projects.  To learn more about Maven and how to configure and use it, see http://www.sonatype.com/resources/books for a great set of free ebooks which cover everything you'd ever want to know about using Maven.

Getting a Development Environment Setup with WebSphere 8.5 Liberty Profile and Eclipse 4.2

We’re ramping up a new project at work and we are required to deploy to WebSphere Application Server.  Luckily WebSphere 8.5 seems to be much more lightweight and much easier to use than its predecessors. Everything here will be done on a machine running Windows 7.

We are going to set up a basic development environment with Eclipse to give us Maven integration and WebSphere integration.

Installation

First make sure you have Java JDK 7 or higher installed.   You can check this by running this command


java -version

Next download IBM WebSphere 8.5.5 Liberty Profile. As described on the download page we just need to run the following command and pick a directory to install in.

java -jar wlp-developers-runtime-8.5.5.0.jar

Next download Eclipse 4.2 for JavaEE Developers and extract it into a folder. We are only using 4.2 here instead of the newest 4.3 due to the fact that the IBM WebSphere plugin for Eclipse is only supported up to 4.2. Start up Eclipse and select a workspace and then we’ll need to add a few plugins.

First we'll need the IBM WebSphere 8.5.5 Developer Tools for Eclipse. You can install this by either searching the Help -> Eclipse Marketplace or dragging the install button from the IBM WebSphere 8.5.5 Liberty Profile download page into Eclipse. Install the plugins, accept any notifications, and then Eclipse will restart.

In the Eclipse Servers tab (ctrl+3 and search for server if you cannot find it) click the "No servers are available. Define a new server from the new server wizard..." link.  This will open the New Server dialog, where you can select the WebSphere Application Server V8.5 Liberty Profile server under the IBM folder. If you don't see the IBM folder or the server, then you need to make sure you installed the necessary Eclipse plugin in the previous step.

Hit next and you'll see the Liberty Profile Runtime Environment dialog. Browse to the folder where you installed WebSphere above and leave the default jre (which should be jre7 if you have the newest java correctly installed).  Click next and rename the server if you want. Note the info for the default HTTP endpoint and hit finish.

The Servers window should now list your new server; right click it and select start.  The server should now start up, and you can see the debug information in the Console window. Open a web browser and navigate to http://localhost:9080 (or whatever you changed the endpoint to) and you should see some information and links to other WebSphere resources.

Next we'll want to install the m2e-wtp plugin, which adds Maven support to the Eclipse WTP plugin.  Go to the Eclipse Marketplace, search for m2e-wtp, select Maven Integration for Eclipse WTP (Juno), and let Eclipse restart after installing.

Create a Maven Project and Deploy to WebSphere

m2e-wtp installs the m2eclipse plugin, which gives us a simple way to create new maven projects.  Right click in your project tab and select New -> Other, and under Maven select New Maven Project. Pick where you want the project created, select maven-archetype-webapp on the next screen, and finally enter the groupId and artifactId for the project.

m2e will now create a very basic webapp project for you. In the Servers tab right click the WebSphere server and click Add and Remove…; you should see the newly created project on the Available side. Highlight your project and click the Add button to configure the project for the server.

Right click your server again and hit start.  Navigate to http://localhost:9080/appname, where appname is what you see under the WebSphere server in Eclipse.  If everything was successful you should see a simple page that says "Hello World!"

Initializing a new repository using GitHub API, curl, and a tiny bit of bash script

Why not use the GitHub web interface?

We could easily set up our new repository using GitHub's nice web interface, but the point of this blog is writing code and gaining experience with all the tools at our disposal.  Also, we can create a quick bash script that automatically sets up a new repo and clones it into our current directory, which should help make the process more seamless.

Installing curl

curl is a command line tool for making http requests (among many other protocols) which we will use to make RESTful http calls to GitHub's API.

In your terminal run


sudo apt-get install curl

Creating a GitHub Repository

Following the current GitHub API for creating a new repo at http://developer.github.com/v3/repos/#create we can produce a single command that will create our new repo and set some basic configuration options.


curl -u USERNAME https://api.github.com/user/repos -d "{\"name\":\"REPONAME\",\"description\":\"REPODESCRIPTION\",\"gitignore_template\":\"LANG\", \"auto_init\":true}"

Enter your GitHub password and curl will print the json response if the request was successful.

The flags used here for curl are pretty simple.  -u specifies the username for authentication and -d takes the data, json in this case, to send to the server.  In this example -d also defaults the request method to POST.  If you need a different request method you can use the -X flag to specify it (e.g. -X DELETE).
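
For example, the same API lets you delete a repository by switching the request method (illustrative only; this assumes your account has delete permissions on the repo):

curl -u USERNAME -X DELETE https://api.github.com/repos/USERNAME/REPONAME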

Cloning the Repository

We need to get the newly created repository copied down to our local machine.  Git provides us with the clone command to pull remote repositories into a local directory.


git clone githubrepourl

Just run the above command, replacing githubrepourl with the url of your repository, and it will be copied down into a new directory named the same as your repo.  You can again use the GitHub API to view a list of all your repositories and get the clone url.  Run


curl -u username https://api.github.com/user/repos

and look at the clone_url field for the repository you want to clone.

Bringing it together with bash script

Finally we can combine the above into a single, simple-to-use script.

#!/bin/bash
# creates a github repo via the api and clones it into the current directory

DESCRIPTION=
NAME=
LANG=
USER=

howto()
{
cat << EOF
$0 usage
This script creates and initializes a remote github repo and clones it into the CURRENTDIR/NAME
-n name of repo and folder to create in current directory
-d repo description
-u github username
-l programming language for .gitignore
EOF
}

# parse our command line flags into variables
while getopts "n:d:l:u:" OPTION
do
	case $OPTION in
		d) DESCRIPTION=$OPTARG ;;
		l) LANG=$OPTARG ;;
		u) USER=$OPTARG ;;
		n) NAME=$OPTARG ;;
		?) howto; exit ;;
	esac
done

# all four flags are required; print usage and bail if any are missing
if [[ -z $DESCRIPTION ]] || [[ -z $LANG ]] || [[ -z $USER ]] || [[ -z $NAME ]]
then
	howto
	exit
fi

curl -u $USER https://api.github.com/user/repos -d "{\"name\":\"$NAME\", \"description\":\"$DESCRIPTION\",\"gitignore_template\":\"$LANG\", \"auto_init\":true}"
git clone https://github.com/$USER/$NAME.git

I am new to bash scripting, so let's take a look at a few important lines in the script above to understand what they do.

All linux shell scripts, bash or otherwise, require the first line of the script to be #! followed by the path to the binary that will accept the script as input and execute it.  The path to bash is /bin/bash.

Similar to C, we can define functions in bash scripts.  The syntax is functionname() { code }.  Unlike C, we don't declare any return or parameter types.

To parse command line parameters bash provides us with getopts, which takes a string where each single character maps to a flag on the command line ('n' maps to -n) and, if followed by a :, means we are provided the string passed directly after the flag.  We then store each of the defined flags in a variable for later use and verify that they were given values using the if statement and the [[ ]] test with the -z flag.

Finally we take the curl and git clone commands we created and do some simple replacements with our variables.

Let’s copy our new script into /usr/local/bin/ and we can use it from the command line to create a new repository any time we need.
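
Assuming you saved the script as newrepo (the file name is up to you), creating and cloning a fresh Java repo becomes a one-liner:

newrepo -u USERNAME -n my-new-repo -d "my new repo" -l Java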

You can access the preceding code and any other bash scripts I write in the future at https://github.com/sgarlick987/bash-scripts/ which I created using the above script.

Source Control with git

Why Source Control?

This may seem like a silly question, but there are plenty of organizations out there still using no source control that take some convincing.  A few of the larger benefits include

  1. Being able to see differences between old and new versions of code
  2. Able to roll back code to an earlier state
  3. Track and audit who made specific changes
  4. Develop code in isolated branches
  5. Easily merge code from multiple users and branches

There are many more benefits to using source control, but even in a solo environment the ability to track changes is undeniably the most important aspect of developing applications.

Why git?

There are many different source control systems out there, but git has a few important advantages over the competition.  We will be going with git mainly because it has very good linux support, it's free, it's fast, and it can be easily hosted on sites such as GitHub and Bitbucket.

git also uses a distributed model, so we don't always need access to a central server and can commit and manage branches locally.  I've run into the issue at work where I don't have access to our SVN repo when remote, and have started using git-svn to handle local commits.

Check out http://git-scm.com/ to learn more about git and http://stackoverflow.com/questions/871/why-is-git-better-than-subversion if you want a much more in-depth look at the differences between git and other systems (don't mind the tone of the initial question, the responses are a very good read if you'd like to know more).

Installing git

Open up your terminal and run


sudo apt-get install git

Continue when prompted and let it install.

We can set the user name and email that will be used to identify our commits by running


git config --global user.name "Stephen Garlick"

git config --global user.email "sgarlick987@gmail.com"

Make sure the email matches the email you plan to use for your GitHub account, and finally


git config --global credential.helper cache

which will let git cache your GitHub password in memory for 15 minutes.
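
If you want a longer window, the cache helper also accepts a timeout in seconds, e.g. one hour:

git config --global credential.helper 'cache --timeout=3600'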

Next time we’ll talk about initializing our git repo and getting it up on github along with a few basic commands to get us started.

Setting up a Development Virtual Machine

Why use a VM?

So why use a VM over doing a native install for development?

  1. Easy to backup and restore snapshots
  2. Easy to share with other developers
  3. Replicate different user environments

Just like everything else related to software, it's extremely important to be able to roll back changes to a specific point in time.  If you manage to completely hose your dev env, it's sometimes easier to just restore than to fix everything.

If you're working on a project with other developers, which you will be sooner or later, then it's easy to get a new developer up and running by passing them your VM instead of having them go through and install everything from scratch.  This can drastically speed up how soon your new dev can start contributing to the project.

Lastly, you can easily replicate various user environments for testing (testing with various IE versions on Windows versus some sort of IE test tool/emulator) which can easily communicate with your server instance.

Choosing an Operating System

Before we can do anything else we must choose an operating system for our development environment.  The big and obvious choices are Windows, Linux, and OS X.  Currently on my main desktop I'm running Windows 7 because I mainly use it for video games and watching videos, but as a developer I have different needs.

We are going to go with Linux Mint.  There are a few reasons for this decision.

  1. It's free
  2. It's newbie friendly
  3. I need to improve my linux skills
  4. It's new

Installation

Download VMware Player and Linux Mint.  The current versions as of now are Linux Mint 15 64 bit and VMware Player 5.

Installing VMware Player on Windows 7 is easy enough; you just need to click through the dialogs and let it run.  Once it finishes installing, run VMware Player and follow the dialogs

  1. Create a New Virtual Machine.
  2. Select Linux and Ubuntu 64
  3. Name it Development Base and pick a location to store it
  4. Select single file and 20 gb space, we can raise this later if necessary
  5. Leave the default hardware settings, we can increase these later if necessary
  6. Hit Finish and then Play Virtual Machine
  7. You should get a popup asking you to download VMware Tools for Linux 9.2.3. Select Download and Install
  8. Ignore the message at the bottom for now about the VMware tools

At this point you should be booted into the desktop of a live cd.  We'll want to run the Install Linux Mint icon and follow some more dialogs.  Make your language and timezone selections and leave the defaults until the computer and user information screen.  On this screen, your computer name is what will be used to identify this vm on your network, and the user information will be used to create a user account.  Let the installation finish and restart.

Now that we've finished installing, we can go back and install the VMware tools.  These tools enable us to use all the hardware of our host computer.  Click Player -> Manage -> Install VMware Tools and a drive will popup.

  1. Extract the contents of the compressed file to the desktop
  2. Open the folder in a terminal and run sudo ./vmware-install.pl
  3. Hit enter for all the default options
  4. Let the script finish executing and exit the prompt

We use the sudo command here to temporarily escalate our privileges to those of the root user to execute a perl script that finishes up the configuration.

And now we have a completely fresh Linux install in which to start configuring our development environment.