Symbiote

Engineer the web, together.

Symbiote news

Keep in touch with what we're up to.

RSS available too if that's your thing!

GUI apps with Docker

Marcus Nyeholt

Posted 27 Mar 2018

Blog

Docker is a thing here at Symbiote - we use it pretty extensively for our development environments, and are working on making it available as a hosting option. While it's great for "server" apps, GUI apps can be a bit more difficult to get going.

I recently needed to get JMeter running again, and wanted a replicable environment for the workbench application (used for putting the tests together) that I could share with others. Thankfully, someone else has done the hard work of figuring out the X11 forwarding, and after a couple of hoop jumps I had the functional Dockerfile below.

# inspired by https://github.com/hauptmedia/docker-jmeter  and
# https://github.com/hhcordero/docker-jmeter-server/blob/master/Dockerfile
# and 
# https://github.com/fgrehm/docker-netbeans/blob/master/Dockerfile

# Docker run with 
# 
# 
# docker run --rm -it  \
#     --name jmeter \
#     -e DISPLAY=$DISPLAY \
#     -v /tmp/.X11-unix:/tmp/.X11-unix \
#     -v /tmp:/tmp \
#     -v $HOME/.Xauthority:/root/.Xauthority \
#     -w /tmp \
#     symbiote/jmeter:3.3
# 

FROM ubuntu:16.04

ARG JMETER_VERSION="3.3"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz

ARG TZ="Australia/Melbourne"


RUN sed 's/main$/main universe/' -i /etc/apt/sources.list && \
    apt-get update && apt-get install -y software-properties-common && \
    add-apt-repository ppa:webupd8team/java -y && \
    apt-get update && \
    echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
    apt-get install -y oracle-java8-installer libxext-dev libxrender-dev libxtst-dev curl && \
    apt-get clean && \
    mkdir -p /tmp/dependencies && \
    curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz && \
    mkdir -p /opt && \
    tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /tmp/*

# TODO: plugins (later)
# && unzip -oq "/tmp/dependencies/JMeterPlugins-*.zip" -d $JMETER_HOME

# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN

# The magic bit that makes the .x11 mapping work out
RUN mkdir -p /home/developer && mkdir -p /etc/sudoers.d && \
    echo "developer:x:1000:1000:Developer,,,:/home/developer:/bin/bash" >> /etc/passwd && \
    echo "developer:x:1000:" >> /etc/group && \
    echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
    chmod 0440 /etc/sudoers.d/developer && \
    chown developer:developer -R /home/developer


WORKDIR    ${JMETER_HOME}

USER developer
ENV HOME /home/developer

CMD ${JMETER_BIN}/jmeter
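One host-side step the run command above assumes: your X server has to accept connections from the container. If JMeter exits with a "cannot open display" error, granting local clients access usually fixes it. This is a command run on the host, not part of the image, and the grant only lasts until your X session ends:

```shell
# Allow local (Unix socket) clients, which includes Docker containers
# sharing /tmp/.X11-unix, to connect to the host's X server
xhost +local:docker
```

With that in place, `docker build -t symbiote/jmeter:3.3 .` followed by the `docker run` command from the Dockerfile's comments should bring up the JMeter GUI.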

Cleaning up after yourself

Marcus Nyeholt

Posted 6 Mar 2018

Blog

Ever taken a backup of a site database, gone to copy it somewhere, and been surprised at how long it takes? Or copied a site database locally to debug an issue and wondered why the restore fails?

We have a few modules we use regularly - Queued Jobs and Data Change Tracker are probably the most guilty - that can silently fill a database table with millions of records, depending on your usage. Consider the following job:

class RegularJob extends AbstractQueuedJob {

    public function process() {
        do_expensive_processing_on_records();

        $nextJob = new RegularJob();

        // run every two minutes, except between midnight and 7am
        $nextTime = date('H') < 7 ? date('Y-m-d 07:00:00') : date('Y-m-d H:i:s', time() + 120);

        singleton('QueuedJobService')->queueJob($nextJob, $nextTime);
    }
}

Pretty straightforward: every 2 minutes, run some expensive processing code unless it's between midnight and 7am. The side effect is that every day, over 500 records are added to the QueuedJobDescriptor table, on top of any other jobs running. Not a huge deal, but over a year that's more than 180,000 records. With other processing jobs happening, on a very interactive site this can lead to a multi-million row table.
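As a quick sanity check on those numbers (a job re-queued every two minutes runs 30 times an hour, for the 17 hours a day outside the midnight-to-7am quiet window):

```shell
# rows added to QueuedJobDescriptor by this one job
echo $(( 17 * 30 ))         # per day: 510
echo $(( 17 * 30 * 365 ))   # per year: 186150
```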

Data Change Tracker is another with this slow creep of content buildup, exacerbated by the fact that, in its most aggressive mode, it captures data object contents (before and after) as well as request variables.

Solving it with ... more jobs.

Luckily, both these modules come with cleanup jobs to ease the process. 

CleanupJob - Removes all jobs that have completed and are older than 30 days (this value is configurable via YAML). It can be created from the Jobs admin section of the CMS.
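For reference, the 30-day window is tuned through the module's YAML config. A sketch based on the Queued Jobs module's documented CleanupJob options - key names may vary between module versions, and the file path is just a convention:

```yaml
# mysite/_config/jobs.yml
CleanupJob:
  is_enabled: true
  cleanup_method: "age"
  cleanup_value: 30    # days of completed jobs to keep
```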

PruneChangesBeforeJob - Create it via the Jobs admin section with a single constructor argument: a strtotime-compatible string describing how far back to keep tracked changes. For example, "-1 month" removes all tracked changes older than one month. We find "-3 months" works pretty well here.

Using previous search data

Marcus Nyeholt

Posted 29 Jan 2018

Blog

We're back! Most of us have taken a good break over the Christmas / New Year period, and are just getting back into 5-day work weeks. To ease back into things, we're looking at how search suggestions from the Extensible Search module can be used.

The module provides an out-of-the-box search page implementation that lets CMS users configure how content is searched in a SilverStripe CMS website, rather than needing everything to be defined in code. One of its nifty features is the ability to record the searches executed on the site, which can subsequently be used to prompt users with known-good search terms.

While ExtensibleSearchPage will handle this for you, if you have your own search implementation you can still make use of the search suggestion capabilities by working directly with the API. Recording a search is as simple as:

singleton('ExtensibleSearchService')->logSearch(
    $term, $totalResults, $queryTime, 'SearchEngineType', $page->ID
);

Under the covers, this leads to the recording of two bits of information:

  • An ExtensibleSearch object, which records the specific term and result count
  • If there were results for that term, an ExtensibleSearchSuggestion object, which backs the "suggested" search terms

[Image: search analytics]

The ExtensibleSearchSuggestion object records the total number of times a term has been searched, along with a flag indicating whether the term is an "approved" suggestion, giving CMS authors control over which terms are surfaced on the site.

Making use of the suggestions is straightforward; for example, say you want to display a list of the most frequently searched terms on your site as a prompt for users:

$suggestions = ExtensibleSearchSuggestion::get()->filter('Approved', true)->sort('Frequency DESC')->limit(5);

and in your template

<ul>
    <% loop $Suggestions %>
    <li><a href="search?term=$Term.ATT">$Term</a></li>
    <% end_loop %>
</ul>
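To connect the query above to the template's $Suggestions loop, expose it from your page controller. A minimal sketch in the SilverStripe 3 idiom of this post - the class name is hypothetical, and the method name just needs to match what the template references:

```php
class SearchResultsPage_Controller extends Page_Controller {
    // The five most frequently searched, CMS-approved terms
    public function Suggestions() {
        return ExtensibleSearchSuggestion::get()
            ->filter('Approved', true)
            ->sort('Frequency DESC')
            ->limit(5);
    }
}
```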

Alternatively, from the frontend of the site in an autocomplete scenario, you can call //site.com/extensible-search-api/getSuggestions?term={term}&page={searchPageID} to retrieve suggestions that partially match the input term.

Note: if you have enabled search logging, you might want to look into the ExtensibleSearchArchiveTask to ensure you don't end up with several hundred thousand records of search history. It periodically prunes the raw records, keeping an archive of the terms and their frequencies.


Page 2 of 6