...
- Each test can be executed as a "standalone" binary in the same style we're currently using for test scripts
- Each test gets configured in the same style that we use for configuration of Eden itself:
- Tests may have constants embedded in them; these constants can always be overridden by...
- ...a YAML file (this is NOT Eden's YAML configuration file – it is a different file, though we can reuse the same parsing library from Eden), and YAML settings in turn...
- ...get overridden by CLI flags that can be passed to a test on the command line
E.g. a test called TestBaseImage may have a configuration knob called baseos.eve.tag with the default value (provided in the test's source code) of 'latest'. It can be further tweaked in a test configuration YAML file by setting TestBaseImage.baseos.eve.tag = YYY AND it can be further tweaked during execution by passing -baseos.eve.tag=XXX
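A minimal sketch of this three-layer override chain is shown below. This is an illustration, not Eden's actual implementation: it assumes the standard flag package plus gopkg.in/yaml.v2, and a hypothetical testbaseimage.yml file next to the binary.
package main

import (
	"flag"
	"fmt"
	"os"

	"gopkg.in/yaml.v2"
)

func main() {
	// Layer 1: the constant embedded in the test's source code
	tag := "latest"
	// Layer 2: the test configuration YAML (NOT Eden's own config file)
	if data, err := os.ReadFile("testbaseimage.yml"); err == nil {
		var cfg struct {
			TestBaseImage struct {
				BaseOSEveTag string `yaml:"baseos.eve.tag"`
			} `yaml:"TestBaseImage"`
		}
		if yaml.Unmarshal(data, &cfg) == nil && cfg.TestBaseImage.BaseOSEveTag != "" {
			tag = cfg.TestBaseImage.BaseOSEveTag
		}
	}
	// Layer 3: a CLI flag overrides both of the above
	flag.StringVar(&tag, "baseos.eve.tag", tag, "tag of the EVE base OS image")
	flag.Parse()
	fmt.Println("effective baseos.eve.tag:", tag)
}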
- When a test starts it assumes (through the use of a common library and a TestContext) that:
- There's a controller that is already up and running. Tests themselves DO NOT engage in starting a controller.
- All the "EVE instances" are already running. Note than an EVE instance may be a physical device (like RaspberryPi) with EVE running, it maybe a Virtual instance of EVE running on a developer's laptop or it may be a Virtual instance of EVE running on a public cloud. Tests themselves DO NOT engage in starting any new EVE instances.
- EVE instances may or may NOT be "registered" with the controller.
- EVE instances may or may NOT be "on-boarded".
- Each test expects (through the use of a common library and a TestContext) that the following information is passed to it and gets recorded in the TestContext:
- URL for the controller (global.controller.url setting) in the form of [adam|zedcontrol]://[user:token@]address
- A list of EVE instance names (these may need to be UUIDs for now – since Adam doesn't do name → UUID mapping for us just yet)
E.g. a test may thus be invoked as:
eden.integration.test -test.run TestBaseImage -controller.url=adam://localhost:8080 -baseos.eve.tag=XXX eve1 eve2
This invocation provides our test with two EVE instances, eve1 and eve2 (either symbolic names or UUIDs).
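Since the controller URL follows the [adam|zedcontrol]://[user:token@]address shape, it can be decomposed with the standard library alone. The sketch below is an assumption for illustration; parseControllerURL is a hypothetical helper, not part of Eden's API.
package main

import (
	"fmt"
	"net/url"
)

// parseControllerURL splits a global.controller.url value into its parts.
func parseControllerURL(raw string) (kind, user, token, address string, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return
	}
	kind = u.Scheme // "adam" or "zedcontrol"
	if u.User != nil { // the user:token@ part is optional
		user = u.User.Username()
		token, _ = u.User.Password()
	}
	address = u.Host // address[:port]
	return
}

func main() {
	kind, _, _, addr, _ := parseControllerURL("adam://localhost:8080")
	fmt.Println(kind, addr) // prints: adam localhost:8080
}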
- Once the test starts, a common library checks what state the EVE instances are in and uses controller API calls to make them operational (e.g. an instance may need to be on-boarded first). For now it is fair to assume that all EVE instances are already on-boarded.
- If running against zedcloud, EVE instances will have to be moved to a unique project identified by a name of the form test-username-timestamp-UUID (this is irrelevant for Adam); see the sketch after this list
- TestContext will always hold a reference to an array of EVE instances. For all the commands that do not specify which EVE instance to use, the first one gets picked
- A test will refuse to execute if it is not given the required number of EVE instances (also sketched below)
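The following sketch illustrates the two points above: the test-username-timestamp-UUID project name and the refusal to run with too few instances. Both function names are hypothetical, the exact name format may differ, and github.com/google/uuid is assumed for UUID generation.
package main

import (
	"fmt"
	"log"
	"os/user"
	"time"

	"github.com/google/uuid"
)

// testProjectName builds the unique per-run project name used on zedcloud.
func testProjectName(testName string) string {
	username := "unknown"
	if u, err := user.Current(); err == nil {
		username = u.Username
	}
	return fmt.Sprintf("%s-%s-%d-%s", testName, username, time.Now().Unix(), uuid.New())
}

// requireNodes aborts the run if too few EVE instances were passed in.
func requireNodes(nodes []string, want int) {
	if len(nodes) < want {
		log.Fatalf("test needs %d EVE instance(s), got %d", want, len(nodes))
	}
}

func main() {
	requireNodes([]string{"eve1", "eve2"}, 2)
	fmt.Println(testProjectName("TestReboot"))
}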
After a test is executed, users want to see a full trace of Info/Log/Metrics events deposited into three (for now) files for further inspection.
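A hypothetical sketch of that dump step follows; the file names, directory layout, and raw []byte payloads are all assumptions here.
package main

import (
	"os"
	"path/filepath"
)

// dumpTraces deposits the three event streams into three files under dir.
func dumpTraces(dir string, info, logs, metrics []byte) error {
	streams := map[string][]byte{
		"info.trace":    info,
		"logs.trace":    logs,
		"metrics.trace": metrics,
	}
	for name, data := range streams {
		if err := os.WriteFile(filepath.Join(dir, name), data, 0o644); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = dumpTraces(".", []byte("info"), []byte("log"), []byte("metrics"))
}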
Putting it all together, we can imagine the following pseudo-code of a TestReboot:
// This context holds all the configuration items in the same
// way that the Eden context works: the command line options override
// YAML settings. In addition to that, the context is polymorphic in
// the sense that it abstracts away a particular controller (currently
// Adam and Zedcloud are supported)
var tc *TestContext // TestContext is at least {
// controller *Controller
// project *Project
// nodes []EdgeNode
// ...
// }
// TestMain is used to provide setup and teardown for the rest of the
// tests. As part of setup we make sure that the context has a slice of
// EVE instances that we can operate on. For any action, if the instance
// is not specified explicitly it is assumed to be the first one in the slice
func TestMain(m *testing.M) {
// this is expected to connect us to a desired controller
tc = NewTestContext(...)
// The following probably needs to be part of NewTestContext,
// I'm breaking it out to explain a few key things:
// Register our own project namespace with the controller for easy cleanup
tc.project = tc.controller.NewProject(name) // name generated from TestReboot + user + timestamp + uuid
// Create representation of EVE instances (based on the names
// or UUIDs that were passed in) in the context. This is the first place
// where we're using zcli-like API:
for _, nodeName := range ... {
edgeNode := tc.controller.GetEdgeNode(nodeName)
if edgeNode == nil {
// Couldn't find existing edgeNode record in the controller.
// Need to create it from scratch now:
// this is modeled after: zcli edge-node create <name>
// --project=<project> --model=<model> [--title=<title>]
// ([--edge-node-certificate=<certificate>] |
// [--onboarding-certificate=<certificate>] |
// [(--onboarding-key=<key> --serial=<serial-number>)])
// [--network=<network>...]
//
// XXX: not sure if struct (giving us optional fields) would be better
edgeNode = tc.controller.NewEdgeNode(nodeName, tc.contr, ...)
} else {
// make sure to move EdgeNode to the project we created, again
// this is modeled after zcli edge-node update <name> [--title=<title>]
// [--lisp-mode=experimental|default] [--project=<project>]
// [--clear-onboarding-certs] [--config=<key:value>...] [--network=<network>...]
edgeNode.Update(... project=XXX ...)
}
// finally we need to make sure that the edgeNode is in a state that we need
// it to be, before the test can run -- this could be multiple checks on its
// status, but for example:
if edgeNode.GetStatus().state != registered {
// we may need to transition the node into the registered state, or maybe
// we just note that fact
}
// this is a good node -- let's add it to the test context
tc.addNode(edgeNode)
}
// we now have a situation where TestContext has enough EVE nodes known
// for the rest of the tests to run. So run them:
res := m.Run()
// Finally, we need to clean up whatever objects may be in the project we created
// and then we can exit
os.Exit(res)
}
func TestReboot(t *testing.T) {
// note that GetEdgeNode() without any argument is
// equivalent to the default (first one). Otherwise
// one can specify a name GetEdgeNode("foo")
edgeNode := tc.GetEdgeNode()
// this is modeled after: zcli edge-node reboot [-f] <name>
// this is expected to be a synchronous call for now
edgeNode.Reboot()
// this is how we make sure that the right event actually happens.
// Note that unlike the previous call this is completely asynchronous.
// We expect AssertInfo method to return immediately and simply
// register a listener function that would check every incoming
// Info message and either exit with success on one of them OR
// exit with failure. However both of these events may happen minutes
// after the following call is made:
tc.AssertInfo("expected reboot to happen", func() {})
// now we're blocking until the time elapses or asserts fires
tc.WaitForAsserts(60) // this is guaranteed to exit within 60 seconds
}
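To make the asynchronous assert semantics above concrete, here is a minimal sketch of how AssertInfo/WaitForAsserts could be built on channels. The InfoMsg type, the predicate signature, and all internals are assumptions for illustration, not the actual Eden API:
package tc

import (
	"fmt"
	"time"
)

// InfoMsg stands in for an Info message arriving from the controller.
type InfoMsg struct {
	Text string
}

type assert struct {
	name string
	pred func(InfoMsg) bool // returns true when the expected event is seen
}

type TestContext struct {
	infoCh  chan InfoMsg // fed by a goroutine streaming controller Info messages
	asserts []assert
}

// AssertInfo registers a listener function and returns immediately.
func (t *TestContext) AssertInfo(name string, pred func(InfoMsg) bool) {
	t.asserts = append(t.asserts, assert{name, pred})
}

// WaitForAsserts pumps incoming Info messages through the registered
// listeners until one of them fires or the timeout elapses.
func (t *TestContext) WaitForAsserts(seconds int) error {
	deadline := time.After(time.Duration(seconds) * time.Second)
	for {
		select {
		case msg := <-t.infoCh:
			for _, a := range t.asserts {
				if a.pred(msg) {
					return nil // success: the expected event arrived
				}
			}
		case <-deadline:
			return fmt.Errorf("timed out waiting for asserts")
		}
	}
}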