I recently spent time building FortiAnalyzer reports to let management see which devices are spending the most time browsing non-work websites. I was really surprised how hard it was to find information on this topic. No default reports on the FortiAnalyzer gave the level of detail I wanted without running the User Detailed Browsing Log over and over for each device and scanning through thousands of logs. Ended up writing custom queries and doing it the hard way. My loss is your gain.
There are a few major caveats that I have to go through with you first:
FortiAnalyzer has no way of telling whether traffic logs are generated by a user or by a background process on the device. For example, if you see a device, let’s call it 192.168.100.28, making connections to a botnet in China, odds are good that malware is doing the talking, not the user.
Without special agents configured, FortiAnalyzer has no way to tell which USER is logged on to a device. If you see 192.168.100.28 connecting to porn websites at night, you may want to verify who was actually sitting at the keyboard before going on a firing spree.
The “Requests” column really refers to the number of traffic logs generated. In my limited review, it seems like a new connect log is generated about once a minute during active browsing, so I use this count to distinguish between a quick connect (for example, to download pictures or advertisements on a linked page) and a long browsing session. The custom reports are set to filter out single requests, dramatically reducing the number of pages.
The “Bandwidth” column is exactly as it seems. If nothing else, goofing off on social media or YouTube does hog bandwidth from other legitimate users.
In my custom report, I filtered out categories that seem like normal work web browsing or data transmissions to/from vendors: Reference, Information Technology, Search Engines and Portals, Web Hosting, Business, Government and Legal Organizations, Information and Computer Security. I also filtered out Advertising because otherwise it is about half the report, and normally users don’t choose to view advertising on purpose.
Selfish plug time (sorry!)
I hope this article helps you (don’t worry, the next section has the FortiAnalyzer code you are seeking). If you have tips or feedback, please comment or send me an email so that others can benefit. I am a consultant in the Maryland/DC area in the USA. My specialties are Windows migrations (to 2016 and to Office 365 / Azure), VMware migrations, Netapp and SAN, and high availability / disaster recovery planning. If your business would like help with your complex project, or would like an architectural review to improve your availability, please reach out! More information and contact can be found on the About page. – Amira Armond
How to create the first custom FortiAnalyzer report “ALL USERS BY CATEGORY”:
Note: The code works well on FortiAnalyzer 5.4.3. If you have syntax problems on other versions, review the “Top Web Users by Allowed Requests” dataset to verify your table and column names.
Create a new dataset named “ALL USERS BY CATEGORY”
Log type = Traffic
Query =
select sum(minutes) as CountTimeStamps, user_src, catdesc, hostname as website, status, sum(bandwidth) as bandwidth
from ###(select count(dtime) as minutes, coalesce(nullifna(`user`), nullifna(`unauthuser`), ipstr(`srcip`)) as user_src, catdesc, hostname, cast(utmaction as text) as status, sum(coalesce(sentbyte, 0)+coalesce(rcvdbyte, 0)) as bandwidth
from $log-traffic
where $filter and hostname is not null and logid_to_int(logid) not in (4, 7, 14) and (countweb>0 or ((logver is null or logver<52) and (hostname is not null or utmevent in ('webfilter', 'banned-word', 'web-content', 'command-block', 'script-filter'))))
group by user_src, catdesc, hostname, utmaction)### t
group by user_src, catdesc, website, status
having sum(minutes) > 1
order by catdesc, CountTimeStamps DESC
Apply…
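The having clause at the end of the query is what hides single-request hits. If the report is still too long for your environment, raising that threshold is the knob to turn. For example (a variation on the query above, not part of the original dataset, so verify it against your own FortiAnalyzer version):

```
-- Only keep sessions with more than 5 request-minutes of activity
having sum(minutes) > 5
```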
Create a new Chart named “ALL USERS BY CATEGORY”
Select dataset = ALL USERS BY CATEGORY
Resolve hostname = Inherit
Chart type = table
(The columns should auto-populate)
Change counttimestamps to “Requests (minutes)” and width = 5%
Change user_src to “User/Source” and width = 14%
Change catdesc to “Category” and width = 20%
Change website to “Website” and width = 0%
Change bandwidth to “Bandwidth” and width = 6% and change the binding for this field to “Bandwidth (KB/MB/GB)”
Order by = unchecked
Show Top (0 for all results) = 0 **Double check this one**
Apply…
Create a new report:
Create from Blank, named “ALL USERS BY CATEGORY”. Go to Layout tab > Insert Chart >
Select the ALL USERS BY CATEGORY chart.
Title = Default
Width = 700
Filters = (Click + to add a filter)
Log Field = Category Description (catdesc)
Match Criteria = Not Equal To
Value = type “Advertising” and press Enter. Now add the rest of the categories, pressing enter between each one.
Advertising
Reference
Information Technology
Search Engines and Portals
Web Hosting
Business
Government and Legal Organizations
Information and Computer Security
Apply and run the report using the last 10 hours or so. You should get something like the picture at the top of this blog. Note: If you have more than 10,000 lines in the report, it will cut off. Report across fewer hours if this happens.
How to create the second custom report “ALL USERS BY USER ACTIVITY”:
Note: The code works well on FortiAnalyzer 5.4.3. If you have syntax problems on other versions…
This worm’s name was Welchia, and it came into the world late one night in mid-2003.
I was early career IT at that time, working Tier 2 Helpdesk in a very large (hundreds of thousands of users) enterprise. We had server farms across the world and I did normal things like resetting user profiles, fixing file shares, and repairing programs.
One morning, I was managing a server overseas when it abruptly went offline. I tried to reach the other servers at that site and none of them responded. Other employees around me confirmed that the network link was down.
About one minute later, I lost my connection to the server farm on the west coast. At this point, we started to get alarmed.
Then a few seconds later, server farms in the mid-USA went offline.
We are now in full panic mode. Is this World War III? Are we under attack?
And then two things happened simultaneously – our own computers lost connection to…
You increased the size of a datastore in the past, but now when you open vCenter, you see the old (smaller) size displayed. There may be low disk space warnings.
Web client for vCenter 6.5 and vSphere 6.5 and probably vCenter 6.0 and vSphere 6.0
If you refresh the datastore information, the correct size displays and the warnings go away temporarily.
The problem re-occurs.
Root Cause
According to the VMware forums, this is caused by having different ESXi versions on the hosts in the datacenter, such as one host on Update 1 and another host on Update 3. The recommended fix is to simply update all ESXi hosts to the same version.
What happens if you can’t?
In my case, I needed to use a custom HP image for some servers, and I’m not going to take down the other hosts to install that custom image. So I kept trying things and found a good workaround.
Workaround
For EACH host in the datacenter that had a connection to the faulty storage, I performed these steps:
Had to troubleshoot a few HP DL360 servers recently during a vSphere upgrade.
They refused to upgrade to vSphere ESXi 6.5 with a conflicting_vibs_error
Symptoms:
After selecting “Upgrade ESXi and preserve the existing datastore”, the system scan presented the following error:
<CONFLICTING_VIBS ERROR: Vibs on the host are conflicting with vibs in metadata. Remove the conflicting vibs or use Image Builder to create a custom ISO providing newer versions of the conflicting vibs.
['qlogic_bootbank_scsi-qla2xxx_……..'
'Emulex_bootbank_scsi-lpfc820_……..'
'QLogic_bootbank_scsi-qla4xxx_……..'
'VMware_bootbank_net-nx-nic_……..'
'Intel_bootbank_net-ixgbe_……..'
'Brocade_bootbank_scsi-bfa_……..'
At that point, I researched and found that this error occurs because the server was originally installed with a custom image from HP. Good news though: HP and VMware have a new custom vSphere ESXi 6.5 image available for a host of HP servers, including the DL360p G8.
Note: This image works on the following models per the documentation… HPE Synergy 480 Gen 9, Synergy 620 Gen 9, Synergy 660 Gen 9, Synergy 680 Gen 9 | Moonshot m510, Moonshot m710x | Proliant Microserver Gen8 | BL460c Gen8, BL460c Gen9, BL465c Gen8, BL660c Gen8, BL660c Gen9 | DL20 Gen9, DL60 Gen9, DL80 Gen9, DL120 Gen9, DL160 Gen8, DL160 Gen9, DL180 Gen9, DL320e Gen8 v2, DL260 Gen9, DL360p Gen8, DL380 Gen9, DL380p Gen8, DL385p Gen8, DL560 Gen8, DL560 Gen9, DL580 Gen8, DL580 Gen9 | ML30 Gen9, ML350 Gen9, ML150 Gen9, ML110 Gen9 |
Also found that the iLO vib is a problem in that image (it causes a Purple Screen of Death) – make sure you download the latest iLO vib and install it after the upgrade.
Attempted again with the new custom image, SAME ERROR! But at this point, I knew that I could re-download any of the vibs after upgrading, so I removed them from the server using these steps:
Identify the vib short name by reviewing the error message. The short name is the text beginning where the hyphens start, just before the version numbers (example in bold): 'qlogic_bootbank_scsi-qla2xxx-934.5.20.0-10EM.500.0.0.472560'
Console into the ESXi server
Run command esxcli software vib remove -n <vibShortName>
Repeat for each offending vib
Attempt upgrade again (this time it should work)
After upgrade, verify that your hardware is working, all NICs are good, etc.
Make sure to install the iLO vib by copying it to the server’s /tmp/ directory (recommend using WinSCP for this), then run command esxcli software vib install -v /tmp/ilo_vibname.vib
Reboot the server to finish activating the iLO vib.
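Put together, the remove-and-reinstall sequence above looks like this from the ESXi console. The vib short names and the iLO vib filename here are examples only; substitute the names from your own error message and download:

```
# Remove each conflicting vib by its short name (repeat for each vib listed)
esxcli software vib remove -n scsi-qla2xxx
esxcli software vib remove -n scsi-lpfc820
# ...run the 6.5 upgrade from the custom HP image, then verify hardware/NICs...
# Install the updated iLO vib that you copied to /tmp/ beforehand
esxcli software vib install -v /tmp/ilo_vibname.vib
reboot
```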
Hope this works for you as well as it worked for me. Good luck folks!
This is an inside joke for IT professionals. It refers to job security. If you are the only person who has the keys, your management will think twice before they fire you. With that level of job security, you can close the door and go back to playing computer games.
As a business owner, you need to stay in control of your computer system! The best way to do this is to demand documentation and account best practices from your computer staff.
Standard IT department documentation should have this information (at minimum):
A visual diagram of major systems (such as servers, network equipment) listing their purpose, how they are connected, and network addresses to administer them.
“Administrator” level usernames and passwords for each piece of network equipment, server operating system, and major application (such as your website and email).
Printouts or backups of configurations for each major program and equipment, so that if your IT guy gets run over by a bus (or quits suddenly), a new person can see how it should work.
Account best practices should be followed:
If a system allows you to create more than one administrator account, each IT staff member should have their own account, named to identify the owner (example: jsmith).
Generic “administrator” or “root” accounts should not be used if named accounts are available.
Generic “administrator” or “root” accounts should have unique, complex passwords that few people know.
These two best practices are important for protecting yourself against criminal IT people as well as outside hackers.
Here are four reasons why:
Access logs can be reviewed periodically or after a security incident to see which account performed the action. Generic accounts always leave the question of “who was actually logged on?”
IT people are much less likely to do improper things if their personal account is associated with it.
If an IT person leaves the company, you can disable their personal accounts easily without hurting anything.
And you are more likely to notice if new administrator accounts are created, or there is unusual access by the “administrator” / “root” account (symptoms of outside hackers).
Mistake #1: After choosing an IT support provider, you no longer need to be involved in the management of them.
As with any department you manage, your IT support provider will need oversight as well as clear and consistent communication. When you stop being involved with your IT support provider, expectations are not clearly communicated and problems are more apt to arise. Additionally, without communication, your strategic, long-term plans may not be included in preparation for future technology spending.
Communication is a two-way street. If you are not involved in setting expectations or communicating questions as they arise, communication and, ultimately, the business relationship suffers. Clear communication is even more important when you have multiple technology vendors including specialized software, email hosting, network administration etc.
Mistake #2: Assume that there will not be any more challenges or issues with your computers once you hire an IT support provider.
Many companies have hired IT support providers only to be disappointed that there are still things that seem to pop up on a regular basis. However, just because you have outsourced your IT support to a company with a good reputation and track record does not mean you won’t have any more challenges or issues ahead of you. It just isn’t realistic.
Computers are machines. User errors, hardware failure, software corruption and regular maintenance issues are the norm, not the exception, even for the best of networks and IT support providers. That is why large companies have whole IT departments. In fact you may even have more issues to address initially because someone is now actually paying attention to your needs and making you aware of them.
Mistake #3: Assume the IT support provider with the lowest price per hour is the best choice.
When choosing an IT support provider, you will have various options. As a small business, keeping costs down is important, so you may be tempted to settle for the lowest bidder. Just remember: you get what you pay for. If a company provides you a quote that is substantially lower than the others, ask why. Perhaps their employees lack necessary certifications or training, or they don’t have the staff or tools in place to ensure your needs are taken care of in a timely manner and that things don’t fall through the cracks. If this is the case, you may end up spending more in both the short and long run by going with the lowest price per hour.
Mistake #4: Assume technical skill or “geekiness” is all you need.
How many times have your eyes glazed over as someone explains why you are experiencing various technical problems with your network? IT support providers who lack the ability to communicate in a non-technical manner, and who don’t have a long-term strategic view of your company’s business goals, will cost you time and money while frustrating you and your employees.
Conclusion:
Whether you are looking for an IT support provider for the first time or are looking for a new one, there are some things you can do now to avoid the mistakes many small companies make. If you can begin to view your IT support provider as an extension of your own company, you will be in good stead to avoid these common mistakes.
Like many other professional services that assist companies, the bottom line in looking for an IT support provider is finding a company with the right qualifications as well as the dedication to making sure you are taken care of to your satisfaction.
The original article is from Corporate Computer Services.
Traditionally, if you needed a new server for your business, you bought an expensive piece of equipment, screwed it into a rack, turned it on, and installed the software.
Now we have cloud computing, which makes the server software independent from the equipment. You could have ten servers worth of software running on one piece of equipment, or you could have the server software running on equipment in someone else’s building!
As a business owner or CIO, you may find that cloud computing is perfect for your business. Here are some things to know:
There are two types of cloud computing: in-house and external.
In-house cloud computing is using a virtualization product like VMware or Microsoft Hyper-V to separate your server software from the equipment. You can buy some high-quality equipment and run a dozen servers on it. Cloud computing lets you manage it all centrally and do maintenance on the equipment without having to shut down servers.
External cloud computing is provided by some of the biggest names in technology: Amazon, IBM, Google, and Microsoft are a few examples. They have loads of server equipment in huge buildings across the world. When you create an account with them, you can reserve a specific amount of equipment (such as ten processors and 1,000 GB disk space) and then install your server software onto it. You will want to set up a connection from your business to your cloud provider (across the Internet), but otherwise the experience will be just like having the servers in your building.
Using cloud computing allows you to rapidly expand or contract your equipment use.
Since cloud computing separates the equipment from your servers, it makes it much easier to add more equipment, re-purpose it, or even remove it depending on your needs.
Cloud computing is one of the most efficient ways to have highly-available, redundant servers.
Cloud computing lets servers virtually “migrate” from one piece of equipment to another. This lets you maintain equipment or even suffer a hardware failure without impacting your servers.
Cloud computing saves you facilities costs for power, cooling, space, racks, and cabling.
If you use an external cloud provider, you let them deal with all of the facilities for hosting equipment. Even if you do cloud computing in-house, you will still see savings because you will be running less equipment than using traditional hardware.
Cloud computing lets you keep your IT staff trim
If you use an external cloud provider, your package may include installing software programs, checking backups, and running security patches. If you do cloud computing in-house, your IT staff will be able to maintain the systems more efficiently using virtualization software.
In the long-term, you will still pay for hardware and licensing costs
If you do the math for five years, you will find that cloud computing costs a little more than purchasing your own servers. But you save on human costs. Big providers like AWS have really good failover, redundancy, and networking capabilities that small and medium businesses just can’t reproduce.
Are you thinking about migrating to Office 365, or AWS, or Microsoft Azure?
You should! Our clients love their Office 365, and the cloud hosting by AWS and Microsoft Azure is top-notch. Companies that migrate to AWS or Microsoft hosting get great business continuity and disaster recovery capabilities without even trying.
Why choose us?
We thoroughly test our solutions, train your staff, and keep your network running.
Kieri Solutions is, at heart, a systems engineering company. We are used to designing and implementing solutions for real companies that want Resilient IT. At the most basic level, this means not breaking your network! Our senior engineers and systems architects are used to working on critical infrastructure such as nuclear plants, military, and healthcare. We are very careful, and we test to make sure everything works before we continue onward. This means less impact for your users and minimizes the cost to your business from lost work.
We are local, and will be available to support in the future.
When you fly in a consultant from a big name company, you will probably never meet that person again. In contrast, once we have performed a project for you, we stand by our work and will respond if you have problems later on. We will also remember you and your network – you won’t be starting from scratch with us.
Our rates are typically half that of a big-name company.
Since we don’t need to fly our employees around, and because we have a smaller footprint, we don’t need to charge crazy rates. We will be glad to give you a no-risk estimate.
Kieri Solutions provides systems consulting to businesses in Maryland: Frederick, Baltimore, Columbia, Rockville, and other cities in MD!
For more information, see these awesome articles from PCMag, CIO, and TechRepublic.
This article is meant for you as a voting citizen, rather than you as a home user or business owner. Nationally, for our own security, we need to make changes to the way our IT infrastructure is managed.
Proposal
Nationally, identify potential computer impacts in a tier structure. Each utility, large company, and government agency would be expected to report this yearly. The idea is to measure the worst impact that a complete information systems disruption could cause if the system is unavailable for three weeks. Based on the tier, the U.S. government would enforce increasing security measures with the intention of reducing the risk of disruption or attack.
This should be done per segment of the enterprise. For example, a power utility has a minimum of four segments: a) the actual power grid b) the customer-facing payment portal c) the internal network for communications and planning d) coordination communications to other power utilities. If these segments were taken down entirely, they would have vastly different impacts.
Tiers:
1st tier: Isolated economic hardship, alternatives exist. This is a service which can be temporarily ignored or bypassed. Think of customer-facing payment portals, or small and medium business outages.
Security requirement for 1st level: No requirement. Businesses will evaluate their own threat profile to determine whether high security is necessary.
2nd tier: Economic hardship, no alternatives. This is a service which can cause harm to our national economy or severe disruption to consumers if it is unavailable for more than a day. A cell phone network could fit this definition, or a complete outage of Amazon.
Security requirement for 2nd level: A security engagement and a Continuity of Operations Plan (COOP) are required. The COOP plan should describe a way to restore service from backups or other storage that would be unaffected by a malicious software attack on the main system.
3rd tier: Potential death toll of 1-10,000. This is a service which supports food/water, health, shelter, or emergency services, but which has readily available alternatives. An example would be a large pharmacy or hospital system – critical patients may not be able to transition in time but the majority of patients would survive. Or loss of 911 service. Or intersection lights becoming un-synchronized. Using some imagination, there are many information systems that could cause deaths.
Security requirement for 3rd level: Network separation or warm secondary system, active security review is required. Network separation means no communication between casual networks (such as the Internet or back-office network) and the critical system. This dramatically reduces the chance of widespread network based attacks and makes intentional hacking extremely difficult. For systems that cannot be separated, the alternative is to run warm secondary systems that can take over in case of outage. The warm systems should be designed to be resistant to the spread of network based attacks. It does no good to have a secondary system that is compromised at the same time as the primary system. Active security review means that the U.S. Government performs annual security audits against the network to find weaknesses, and requires that the organization fixes these weaknesses.
4th tier: Catastrophic death toll 10,000+. This is a service which supports food/water, health, shelter, or emergency services which does not have alternatives. Power grid or water failures fit into this category, especially if they can cascade across more than one local region. Outages which prevent our military/government from responding to national threats also count.
Security requirement for 4th level: Segmented network separation and second-set-of-eyes rule. Segmented network separation means that not only is there an air-gap between the critical system and casual networks, but wherever possible the critical system should be segmented into independent pieces too. For example, a power grid control network should be designed to be independently-operable in each geographic region. The idea is to prevent cascading outages like the one that cut off power to the entire Northeast United States and part of Canada in 2003. Second-set-of-eyes rule is an extreme security measure which is intended to prevent all insider and outsider threats. Essentially, any time the critical system is altered (such as a patch or update), the package code should be reviewed by a minimum of two people for unintended functionality such as backdoors or time bombs. This rule requires the critical system to be very simple (think UNIX) and single-purpose, since it would be impossible to review the code on complex systems like Microsoft Servers.
Over the last five years, hard drive encryption has become mainstream for enterprise-size businesses.
I recommend encryption for any business or personal computer which holds critical information, but it is important to understand that it only protects against physical access to the computer – it doesn’t help against viruses or attacks against the software.
How it works
The hard drive is where your critical information is held. This information is stored in a logical, well-defined manner, in data files just like the files you see when you open your C:\ drive. A very simple hack is to unscrew the target hard drive, plug it into the bad guy’s computer as a secondary drive, and access the files from D:\. This takes skills learned in the first year of a PC tech’s career. It bypasses your log-on passwords and any network security completely. This is what hard drive encryption protects against.
An encrypted hard drive is split into two sections – a very small decryptor section at the beginning, and the main storage section. The main storage section has the encrypted data, but not any tools to decrypt it. The decryptor section has the knowledge to decrypt, but it doesn’t know the key.
The key is provided by an outside source such as you or a server on your network.
Once the key is entered (when the computer boots up), the encryption software will keep using it (storing it temporarily in RAM) until the computer is powered off.
Automatic decryption once logged on is the reason why hard drive encryption is only a partial solution for security. Once the computer is turned on, the key is entered, and the operating system (such as Windows 10) is running, all files on the hard drive can be accessed across the network or by the logged on user. So all the normal security measures such as patches, antivirus, firewalls, and security policy are still necessary.
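To make the “key lives in RAM” idea concrete, here is a toy sketch in Python. This is NOT real encryption (products like Bitlocker use AES); the sample data, key, and XOR scheme are all made up purely to show why data on a stolen drive is unreadable without the key:

```python
# Toy illustration of drive encryption -- NOT real cryptography.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a repeatable byte stream from the key (toy key schedule)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def crypt(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; encrypting and decrypting are the same operation."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"payroll: jsmith $52,000"       # a file on the C:\ drive
key = b"passphrase-held-only-in-RAM"      # supplied at boot, never written to disk

on_disk = crypt(secret, key)              # what a thief sees on the stolen drive
print(on_disk != secret)                  # True: ciphertext, unreadable
print(crypt(on_disk, key) == secret)      # True: readable once the key is supplied
```

Note that once the machine is booted and the key is in RAM, `crypt` happens transparently, which is exactly why the normal network-facing defenses are still needed.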
Bottom line: If there is any risk that your computer or hard drives could be physically stolen, invest in hard drive encryption.
There are several options ranging from workstation protection (Bitlocker, available with Pro versions of Windows 10), to extremely secure virtualization-compatible products like HyTrust DataControl for $400-$1000 per server. Work with a professional to make sure you have a good backup first, and a plan to troubleshoot issues. Losing the key or corrupting the encryption software can cause a loss of all data.
The article unfortunately doesn’t discuss defense tactics that the average small business can perform. This is where having a security-savvy IT person comes in handy.
Cybersecurity Defense Priorities for Small Business
Have good backups. Backups that are stored with an “air gap” between your business network and the storage media. Backups that are tested regularly to ensure that they can actually be restored.
Deny entry. Make sure all internet-connected computers at your business have antivirus and the latest patches. Invest in an email filter. Configure firewalls to allow only “normal” traffic patterns, both inbound and outbound.
Separation. Use the principle of least privilege so that even if something is accidentally executed, at least it will be executed as a user instead of an administrator. Individually firewall and protect your servers from the rest of the network.
Despite precautions, things can go wrong. Backups, good documentation, and a system administrator who can recover your network will give you options when responding. Don’t feed the wolves!