This software captures user input, associated metadata and screen images from the Customer's devices, sending this information via the Concentrator for analysis.
Smart Profiling - Builds an up-to-the-minute profile of activity per individual, allowing the risk profile and context of a situation to be accurately analyzed.
Text Analysis - Captures all text input via the keyboard, whether online or offline, allowing monitoring even within encrypted sites or apps.
Language Support - Able to monitor a range of language styles including slang, colloquialisms, local language, abbreviations, euphemisms and more.
24/7/365 Human Moderation - Content is monitored and reviewed by a team of moderators available around the clock to analyze instances and alert Safeguarding Officers of any high risk incidents.
Instant Alerting - Alerts are based upon specific categories you identify as serious incidents, and are sent immediately to your designated safeguarding contacts.
Image Capture - Screen capture functionality sits within the solution, allowing any online and offline incidents that require investigation to be screen grabbed for later review or evidence.
Flexible Moderation - Profiles and policies are built based upon your own risk qualification criteria.
Artificial Intelligence - Uses machine learning to gather context before escalating for human moderation, improving performance and reducing false positives.
No Impact on Performance - You will not experience any latency or notice the client is running at any time, including when taking screen captures.
To use Smoothwall Monitor, the first step is to install client software on the Windows or Mac device of the user that you want to protect.
Visigo client software can be downloaded directly from Smoothwall, and can be installed via group policy for simplicity across a school or organization. The user can’t see that the client software is present, even if the Visigo client is distributed by group policy while they are using the machine. The client will just work the next time the machine is booted.
Visigo updates are silent to the user; every time the device boots up, it checks for, and installs, updates if available. Visigo is designed for use on desktop devices, currently supporting Windows and Mac operating systems. It works with all hardware keyboards, as well as Bluetooth and virtual keyboards.
As the Visigo software is not visible to the user on the machine, and you hope never to receive an alert from us, we provide the Visigo Portal, which displays the status of the client on every machine it is installed on. This allows a System Administrator to identify, and rectify, any machine where the client is not operating as expected.
Can we add our own keywords for alerts? This isn't a function Visigo has, mainly because, rather than being a simple keyword logger, Visigo looks at words in their context and at patterns over time, not just at individual words. Some words will always trigger the system and be reviewed by our team; words or phrases that are only associated with risk in certain settings are analysed within their context, meaning we eliminate many false positives before they even reach our team. We are also able to pick up the development of new words being used on your devices which could pose a threat. For example, we recently worked with an FE college where the students started to use a new term for cannabis that we hadn't come across. By looking at the context around the new word, the software can flag that its meaning may have a risky association, giving us awareness of the new term.
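As an illustration of this context-based approach (this is a minimal sketch, not Visigo's actual algorithm; the term lists, window size and function names are hypothetical), a term can be escalated for human moderation only when risk-related context words appear near it:

```python
# Minimal sketch of context-aware flagging. A candidate term is escalated
# only when risky context words occur within a few words of it, so an
# innocuous use of the same word is not flagged. All terms are hypothetical.

RISK_TERMS = {"grass"}                              # hypothetical slang term
CONTEXT_TERMS = {"buy", "deal", "smoke", "score"}   # hypothetical risk context

def should_escalate(text: str, window: int = 5) -> bool:
    """Return True when a risk term appears in a risky context."""
    words = text.lower().split()
    for i, word in enumerate(words):
        if word in RISK_TERMS:
            nearby = words[max(0, i - window): i + window + 1]
            if CONTEXT_TERMS.intersection(nearby):
                return True   # risky term used in a risky context
    return False

print(should_escalate("can you buy some grass tonight"))  # True
print(should_escalate("mow the grass before school"))     # False
```

In practice the same idea extends across time: repeated low-level matches against one profile raise its score, which is where the per-user profiling described elsewhere in this document comes in.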
How long are what you deem false positives kept for, in case we need to go back and deal with something that actually wasn't a false positive? We will only mark something as a false positive once it has been thoroughly checked by several members of our team; if we are in any doubt about an alert, we will send the information through. The aim is to reduce the workload on your staff, but also to protect staff from having irrelevant private information seen by colleagues. For example, a staff member writing an email containing personal information may inadvertently trigger the software while posing no risk and clearly having done nothing out of line. This is a typical case where a false positive not only wastes the school's time, but could also invade someone's privacy by showing a colleague sensitive personal information they do not need to know.
Does Visigo monitor Internet usage? Not specifically. We log everything that is typed (and what app it was typed in to) so we will record any URLs they type, but that's not quite the same thing. At some point in the future we'll definitely combine Visigo data and the logs from Smoothwall.
How are alerts generated and prioritised to enable a rapid response to immediate issues, and what operational procedures are in place to facilitate that process? The Visigo solution uniquely builds a profile of each user, and each incident recorded against a profile receives a score. Risks are scored based on the level of perceived threat, harm or danger, and once the risk score becomes critical, the school or college is alerted. A critical score could be triggered by a series of incidents or by one standalone event. The procedure that follows is based on the pre-agreed escalation process determined by the school or college, which can include an immediate phone call to the designated safeguarding contact. The risk scoring profile is determined during the risk qualification review, ensuring that appropriate policies are set for each user group.
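The scoring model described above can be sketched as follows. This is an illustrative simplification, not Visigo's implementation; the class name, scores and threshold are hypothetical, standing in for the values agreed during the risk qualification review.

```python
from collections import defaultdict

# Sketch of per-user cumulative risk scoring. Each incident adds to the
# user's profile score; an alert is raised once the score turns critical.
CRITICAL_THRESHOLD = 100   # hypothetical value set at risk qualification

class RiskProfiles:
    def __init__(self) -> None:
        self.scores = defaultdict(int)   # user -> cumulative risk score

    def record_incident(self, user: str, score: int) -> bool:
        """Record one incident; return True once the profile turns critical."""
        self.scores[user] += score
        return self.scores[user] >= CRITICAL_THRESHOLD

profiles = RiskProfiles()
# A series of lower-level incidents can accumulate to a critical score...
print(profiles.record_incident("student_a", 40))   # False
print(profiles.record_incident("student_a", 70))   # True -> alert the school
# ...or one standalone event can trigger the alert outright.
print(profiles.record_incident("student_b", 150))  # True
```

The two `True` results correspond to the two alert paths the answer describes: a cumulative series of incidents, and a single standalone event.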
How are false positives dealt with? Everything is kept: we retain all text entry, whether or not it is identified as belonging to a specific category or level.
How long are alerts kept for? Data is kept as long as a student remains at that school. Customers can contact us to manually remove data before this date if required. A future portal update will add the ability to do this themselves.
Is Visigo intelligent enough to ignore certain things, such as internet banking logins? Yes. The information will go to the Concentrator, which will recognise that it is, for example, a username, and ignore this content.
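To make the idea concrete, here is a minimal sketch of how a concentrator-style service might strip credential-like input before analysis. This is not Visigo's actual logic; the patterns and names are hypothetical assumptions for illustration.

```python
import re

# Illustrative credential redaction: tokens that look like usernames or
# card numbers are replaced before any analysis or storage takes place.
# The patterns below are hypothetical examples, not Visigo's real rules.
CREDENTIAL_PATTERNS = [
    re.compile(r"\b\d{12,19}\b"),   # long digit runs, e.g. card numbers
    re.compile(r"\b[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}\b", re.I),  # email-style usernames
]

def redact_credentials(text: str) -> str:
    """Replace credential-like tokens so they are never analysed or stored."""
    for pattern in CREDENTIAL_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact_credentials("login: jo@example.com card 4929123456789012"))
# login: [REDACTED] card [REDACTED]
```

Doing this at the server rather than on the client keeps the client lightweight while still preventing sensitive credentials from entering the moderation pipeline.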
Do we need a server on site for use with Visigo? No.
How are you complying with the requirement for data to be stored on EU servers? Data is stored in accordance with EEA regulations.
Do we need a Smoothwall filtering solution to have Visigo? Not as standard, but the two partnered together give you a robust monitoring and filtering safeguarding suite.
How is Visigo installed? On Windows, Visigo is installed via GPO (Group Policy Object) in MSI format. Once installed, the client is auto-updating and will 'call home' for updates on a daily basis. The client itself is small and lightweight, with no prerequisites (e.g. Java). It can be installed on Windows 7 onwards, with both 32- and 64-bit support. On macOS, an unattended deployment is possible using Apple Remote Desktop or similar tools. Support documentation is available for both Windows and macOS deployments.
What devices does Visigo cover? Visigo currently covers Macs and PCs (Windows 7 upwards). Chromebook support will follow in the next release.
How does Visigo work on personal devices? Ofcom regulations do not allow personal telephone devices to be monitored to this level. In a day school it can be problematic to ask young people to put this level of monitoring on their devices; it requires their permission and consent, as monitoring would also continue once the devices are off site.
Boarding schools, where there is an extended duty of care and children are resident on site, can set permissions and policies that allow Visigo to be put on devices.
We recommend shutting down the riskiest aspects of the internet on the BYOD network, such as social media, thereby encouraging their use on school-owned devices. Smoothwall provides a good level of monitoring of web traffic on BYOD, which fits the statutory requirements for BYOD monitoring.
Schools do not have the same level of responsibility for personal devices as they have for their own devices; responsibility for school-owned devices extends to when those devices are off site.
Where does the monitoring take place and who is checking it? All moderation is carried out by a UK-based team. The organisation has been operating for over 10 years, monitoring a variety of online platforms such as social media, chat, forums and applications. Much of what they do as an organisation focuses on chat applications aimed at children and young people; as you can imagine, this expertise is highly useful when monitoring alerts from schools. The organisation is SOC 2 compliant, and is audited on a three-monthly cycle to make sure this remains the case. This ensures that data is held securely, legally and safely, and that the correct procedures are in place and being followed. Monitoring is 24/7.
How is the data being transferred? Without going into the technical details, all data is encrypted and transferred securely. Our servers, where the data is processed, are also locked down and secure.
What happens to material once it has been checked? Alerts are processed and sent to the school/college via the contact details they have provided and requested. False positives (data picked up by the software but not considered a viable alert) are stored. The school/college are able to request these but generally they are not sent through. All of the alerts and false positives are stored. If schools leave the service we will arrange for this data to be transferred to them.
Who has access to it? The moderators checking the data have access to it, the school have access to the alerts and the false positives (if requested). The data is not shared with anyone else.
What advice are you giving to schools prior to installation, not just technical but user engagement/consent? We are not safeguarding consultants (some organisations allow or encourage their employees to use that title) and are not in a position to give formal safeguarding advice. With that caveat, we recommend that schools have a clear and robust acceptable usage policy in place, and it may be worth reiterating this with staff, students and sometimes parents. If the service is going on any pupil- or parent-owned devices (as is wont to happen in a boarding school, for example), we make it clear that they need explicit permission from the device owner to do so. Generally, schools have very clear ideas about how they want to go about this: some will go to the lengths of getting union clearance on the service, while others will put the service in as covertly as possible.
How long after an alert is raised do the moderators have to decide whether to use the screenshot?
Technically, screenshots are kept in adherence with our data retention policy. The SLA guarantees a maximum response time of 30 minutes for an alert.