So we talked about getting basic setup out of the way in the previous post. We didn’t attempt anything fancy – the goal was just to drive a little bit of load against Tableau Server.
In this entry, we’ll focus on getting into some of the fun settings you can play with. We’ll get data collection working in post three and talk more about reporting in the fourth and final entry.
Basic Configuration Settings for TabJolt
I have to admit, most of these are pretty straightforward. I haven't actually played with the proxy* settings, so I can't speak to them in depth. That said, I turned proxying on and fired up Fiddler to see what happened. My server uses SSL, so I had to use Fiddler's ability to decrypt HTTPS traffic.
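If you want to try the same experiment, the proxy keys live in ServerTestConfig.yaml. Here's a rough sketch; proxyEnabled is the switch discussed above, but the host/port key names are my assumption, so check your copy of the file, and 127.0.0.1:8888 is simply Fiddler's default listening address:

```yaml
# ServerTestConfig.yaml (fragment) -- route TabJolt traffic through Fiddler.
# The proxyServer/proxyPort names are assumptions; verify them against
# the commented defaults in your own yaml file.
proxyEnabled: true
proxyServer: 127.0.0.1   # Fiddler listens on localhost...
proxyPort: 8888          # ...port 8888 by default
```

Flip proxyEnabled back to false when you're done, or every test you run will drag Fiddler along for the ride.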
Here’s Fiddler capturing all the traffic coming from TabJolt:
You can see the requests coming from TabJolt – java.exe (process #10616), so that’s cool. Set proxyEnabled back to false, and here’s the same test executing. Nothing in Fiddler, imagine that.
The vizDataSource property tells TabJolt which source it should use to build the list of vizzes it will execute. Specifying csv (the default) relies on what you've defined in vizpool.csv. On the other hand, web lets TabJolt test any and all vizzes stored on your Tableau Server, using an admin account that you must specify in ServerTestConfig.yaml.
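In ServerTestConfig.yaml the switch is a one-line change. Something like this (the value names come straight from the discussion above; treat the surrounding layout as approximate):

```yaml
# ServerTestConfig.yaml (fragment)
# "csv" reads the viz list from vizpool.csv; "web" exercises every viz
# on the server via the admin account configured elsewhere in this file.
vizDataSource: csv
```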
I personally like sticking with csv – mainly because it gives me fine-grained control over the workload.
If you stick the path of the same viz in vizpool.csv several times, then TabJolt will execute it multiple times. This handy little behavior allows you to add heavier weighting to certain vizzes if you know they get executed more often by your users. A combination of vizpool.csv and what you see in the built-in Traffic to Views dashboard on VizPortal’s status page will allow you to build a workload that’s a very close approximation of what happens in your world. This is a good thing.
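To make the weighting concrete, here's a hypothetical vizpool.csv. The workbook and view names are made up; the point is that Dashboard1 appears three times, so TabJolt will hit it roughly three times as often as the other two vizzes:

```
/views/SalesWorkbook/Dashboard1
/views/SalesWorkbook/Dashboard1
/views/SalesWorkbook/Dashboard1
/views/SalesWorkbook/RegionalDetail
/views/HRWorkbook/Headcount
```

Pull the relative view counts from the Traffic to Views dashboard, then repeat each path in proportion, and your synthetic workload starts to look a lot like your real one.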
The forceLogin property is pretty well documented in the yaml file itself. In my earlier post, I mentioned setting this property to “true” if you get into a situation where you’re getting an unexplained 100% error condition. Not 100% sure why this works sometimes, but whatever.
The (new!) thinkTime settings allow you to add additional “pauses” in your viz executions to simulate real-world human behavior. This is a great thing to do, but keep in mind that doing so may make your results look a little hinky. Here’s what I mean: Let’s say I set thinktimeBetweenTest to 3000 ms and thinktimeBeforeInteraction to 1000 ms — I’m introducing up to 4 seconds of latency into each loop (note thinktimeBeforeInteraction generally introduces a fraction of the thinktime you define). We can clearly see this in the Test Response Time report that comes with TabJolt. Below, I’m executing the same viz over and over again with a single user for 2 minutes:
Now, let’s set both think time properties to 0 and try the exact same test again. Isn’t it interesting that the Interact Viz Test is now taking, on average, about 4 seconds less than it did before? And we’re hitting the exact same viz!
Here are the two tests compared in terms of average test response time: ~4 sec. vs. ~8 sec!
Starting to get my drift? The artificial latency that you add seems to be counted in the average response time – potentially making Tableau look slower than it actually is:
Here, I’ll add the same settings back, and run the same test a 3rd time…
The third test is right back up there. So…just be aware that your vizzes are not slower; you’re just seeing the think time reflected in the response time. Make sense? Good, because Stats for Load Times agrees with my assessment…~3-4 seconds, not 8:
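If you'd rather back the think time out of the reported numbers instead of re-running a zero-think-time test, the arithmetic is simple. A minimal sketch, with one loud assumption: the 25% default for the thinktimeBeforeInteraction fraction is my guess based on the "fraction of the thinktime" behavior noted above, not a documented constant:

```python
def estimated_render_time(measured_avg_ms: float,
                          between_test_ms: float,
                          before_interaction_ms: float,
                          before_fraction: float = 0.25) -> float:
    """Subtract the artificial think-time latency from TabJolt's reported
    average so the number better reflects actual viz render time.

    before_fraction is an assumption: thinktimeBeforeInteraction only
    introduces a fraction of the configured pause (see discussion above).
    """
    return (measured_avg_ms
            - between_test_ms
            - before_fraction * before_interaction_ms)

# Roughly matching the numbers in this post: an ~8 s measured average with
# 3000 ms between-test and 1000 ms before-interaction think time backs out
# to a bit under 5 s of real work.
print(estimated_render_time(8000, 3000, 1000))  # 4750.0
```

It's a rough correction, but it lines up with what Stats for Load Times shows for the same vizzes.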
Moving along, insertOnlyParentSamples is a way to save less data to your results database if you plan on running a really, really long test. It saves the “Parent” event in Postgres (like “InteractView Test” in Figure 1), but not the child test steps (like “BetweenTestThinkTime” and “BetweenInteractThinkTime”).
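If your Postgres results database is filling up during a multi-hour run, the trim is one setting. A sketch of the relevant fragment (the key name comes from the discussion above; its placement in the file is approximate):

```yaml
# ServerTestConfig.yaml (fragment)
# Record only parent events like "InteractView Test"; skip child steps
# such as "BetweenTestThinkTime" to keep the results database small.
insertOnlyParentSamples: true
```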
Ignore the reusePresModels setting if you happen to notice it. It doesn’t do anything…shouldn’t be in there.
Next up: Diving into the dreaded dataretriever.config file. Sounds scary!