Evaluating the multi-threading countermeasure
- Frieslaar, Ibrahim, Irwin, Barry V W
- Authors: Frieslaar, Ibrahim , Irwin, Barry V W
- Date: 2016
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428352 , vital:72505 , https://researchspace.csir.co.za/dspace/bitstream/handle/10204/9041/Frieslaar_2016.pdf?sequence=1andisAllowed=y
- Description: This research investigates the resistance of the multi-threaded countermeasure to side channel analysis (SCA) attacks. The multi-threaded countermeasure is attacked using Correlation Power Analysis (CPA) and template attacks. Additionally, it is compared to the existing hiding countermeasure. Furthermore, additional signal processing techniques are used to increase the attack success ratio. It is demonstrated that the multi-threaded countermeasure is able to outperform the existing countermeasures by withstanding the CPA and template attacks. Furthermore, the multi-threaded countermeasure is unaffected by the elastic alignment and filtering techniques, as opposed to the existing countermeasures. The research concludes that the multi-threaded countermeasure is indeed a secure implementation to mitigate SCA attacks.
- Full Text:
- Date Issued: 2016
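The abstract above mentions attacking the countermeasure with Correlation Power Analysis (CPA). As a rough illustration of what a CPA step involves (not the authors' attack code), the sketch below correlates a Hamming-weight leakage hypothesis against power traces for each key-byte guess; the trace matrix, plaintexts and the simplified plaintext-XOR-key leakage model are all placeholder assumptions.

```python
"""Minimal CPA sketch: correlate a Hamming-weight leakage hypothesis with power
traces for every key-byte guess. The traces and plaintexts below are synthetic
placeholders; a real attack would target the cipher's S-box output rather than
the simplified plaintext XOR key model used here."""
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples = 500, 200
plaintexts = rng.integers(0, 256, n_traces)        # one target byte per trace
traces = rng.normal(size=(n_traces, n_samples))    # placeholder power traces

def hamming_weight(x: int) -> int:
    return bin(int(x)).count("1")

def column_corr(hyp, traces):
    """Pearson correlation between the hypothesis vector and every sample column."""
    h = hyp - hyp.mean()
    t = traces - traces.mean(axis=0)
    return (h @ t) / np.sqrt((h @ h) * (t * t).sum(axis=0))

def cpa_best_guess(traces, plaintexts):
    scores = []
    for guess in range(256):
        # Hypothetical leakage for this key-byte guess (simplified model).
        hyp = np.array([hamming_weight(p ^ guess) for p in plaintexts], dtype=float)
        scores.append(np.abs(column_corr(hyp, traces)).max())
    return int(np.argmax(scores)), max(scores)

guess, score = cpa_best_guess(traces, plaintexts)
print(f"best key-byte guess: {guess:#04x} (|r| = {score:.3f})")
```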
The pattern-richness of graphical passwords
- Vorster, Johannes, Van Heerden, Renier, Irwin, Barry V W
- Authors: Vorster, Johannes , Van Heerden, Renier , Irwin, Barry V W
- Date: 2016
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/68322 , vital:29238 , https://doi.org/10.1109/ISSA.2016.7802931
- Description: Publisher version , Conventional (text-based) passwords have shown patterns such as variations on the username, or well-known passwords such as “password”, “admin” or “12345”. Patterns may similarly be detected in the use of graphical passwords (GPs). The most significant such pattern, reported by many researchers, is hotspot clustering. This paper qualitatively analyses more than 200 graphical passwords for patterns other than the classically reported hotspots. The qualitative analysis finds that a significant percentage of passwords fall into a small set of patterns; patterns that can be used to form attack models against GPs. Conversely, these patterns can also be used to educate users so that future password selection is more secure. It is hoped that the outcome of this research will lead to improved behaviour and an enhancement in graphical password security.
- Full Text: false
- Date Issued: 2016
DDoS Attack Mitigation Through Control of Inherent Charge Decay of Memory Implementations
- Herbert, Alan, Irwin, Barry V W, van Heerden, Renier P
- Authors: Herbert, Alan , Irwin, Barry V W , van Heerden, Renier P
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430339 , vital:72684 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: DDoS (Distributed Denial of Service) attacks in recent years have proven devastating to the target systems and services made publicly available over the Internet. Furthermore, the backscatter caused by DDoS attacks also affects the available bandwidth and responsiveness of many other hosts on the Internet. The unfortunate reality of these attacks is that the targeted party cannot fight back due to the presence of botnets and malware-driven hosts. The hosts that carry out the attack on a target are usually controlled remotely, and the owner of the device is unaware of it; for this reason one cannot attack back directly, as this would serve little more than to disable an innocent party. A proposed solution to these DDoS attacks is to identify a potential attacking address and ignore communication from that address for a set period of time through time stamping.
- Full Text:
- Date Issued: 2015
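The abstract proposes identifying an attacking address and ignoring its traffic for a set period through time stamping. The sketch below illustrates that general idea with a time-decaying blocklist; the threshold, window and block-duration values are invented for illustration and are not taken from the paper.

```python
"""Minimal sketch of time-stamped blocking: once a source exceeds a packet-rate
threshold it is ignored for a fixed period, after which its entry decays.
Thresholds and durations are illustrative only."""
import time

class DecayingBlocklist:
    def __init__(self, threshold=100, window=1.0, block_seconds=300.0):
        self.threshold = threshold        # packets per window before blocking
        self.window = window              # counting window in seconds
        self.block_seconds = block_seconds
        self.counts = {}                  # src -> (window_start, packet_count)
        self.blocked = {}                 # src -> unblock_time

    def should_drop(self, src, now=None):
        now = time.monotonic() if now is None else now
        # Expired blocks "decay" and are removed.
        if src in self.blocked:
            if now < self.blocked[src]:
                return True
            del self.blocked[src]
        start, count = self.counts.get(src, (now, 0))
        if now - start > self.window:     # start a fresh counting window
            start, count = now, 0
        count += 1
        self.counts[src] = (start, count)
        if count > self.threshold:
            self.blocked[src] = now + self.block_seconds
            return True
        return False

bl = DecayingBlocklist(threshold=3, window=1.0, block_seconds=5.0)
for i in range(6):
    print(i, bl.should_drop("198.51.100.7", now=float(i) * 0.1))
```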
Multi sensor national cyber security data fusion
- Swart, Ignus, Irwin, Barry V W, Grobler, Marthie
- Authors: Swart, Ignus , Irwin, Barry V W , Grobler, Marthie
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430393 , vital:72688 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: A proliferation of cyber security strategies has recently been published around the world, with as many as thirty-five strategies documented since 2009. These published strategies indicate the growing need to obtain a clear view of a country’s information security posture and to improve on it. The potential attack surface of a nation is, however, extremely large, and no single source of cyber security data provides all the information required to accurately describe the cyber security readiness of a nation. There are, however, a variety of specialised data sources that are rich enough in relevant cyber security information to assess the state of a nation in at least key areas such as botnets, spam servers and incorrectly configured hosts present in a country. While informative from both an offensive and a defensive point of view, the data sources vary in factors such as accuracy, completeness, representation, cost and data availability. These factors add complexity when attempting to present a clear view of the combined intelligence of the data.
- Full Text:
- Date Issued: 2015
Observed correlations of unsolicited network traffic over five distinct IPv4 netblocks
- Nkhumeleni, Thiswilondi M, Irwin, Barry V W
- Authors: Nkhumeleni, Thiswilondi M , Irwin, Barry V W
- Date: 2015
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430408 , vital:72689 , https://www.academic-bookshop.com/ourshop/prod_3774091-ICCWS-2015-10th-International-Conference-on-Cyber-Warfare-and-Security-Kruger-National-Park-South-Africa-PRINT-ver-ISBN-978191030996.html
- Description: Using network telescopes to monitor unused IP address space provides a favorable environment for researchers to study and detect malware, denial of service and scanning activities within the global IPv4 address space. This research focuses on comparative and correlation analysis of traffic activity across the network of telescope sensors. Analysis is done using data collected over a 12-month period on five network telescopes, each with an aperture size of /24, operated in disjoint IPv4 address space. These were considered as two distinct groupings. Time series representing time-based traffic activity observed on these sensors were constructed. Using the cross- and auto-correlation methods of time series analysis, moderate correlation of traffic activity was achieved between telescope sensors in each category. Weak to moderate correlation was calculated when comparing the category A and category B network telescopes’ datasets. Results were significantly improved by considering TCP traffic separately: moderate to strong correlation coefficients in each category were calculated when using TCP traffic only. UDP traffic analysis showed weaker correlation between sensors; however, the uniformity of ICMP traffic showed correlation of traffic activity across all sensors. The results confirmed the visual observation of traffic relativity in telescope sensors within the same category and quantitatively analyzed the correlation of network telescopes’ traffic activity.
- Full Text:
- Date Issued: 2015
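The abstract applies cross- and auto-correlation to per-sensor traffic time series. The following sketch shows the kind of lagged Pearson correlation involved, using two synthetic hourly packet-count series as stand-ins for telescope sensors; the data, noise levels and lag range are assumptions.

```python
"""Sketch of cross-correlation between two telescope traffic time series.
Both hourly packet-count series are synthetic placeholders; sensor B is given
an artificial two-hour lag so the correlation peak is visible."""
import numpy as np

rng = np.random.default_rng(1)
base = rng.poisson(50, size=24 * 30).astype(float)          # shared scanning activity
sensor_a = base + rng.normal(0, 5, base.size)
sensor_b = np.roll(base, 2) + rng.normal(0, 5, base.size)   # sensor B lags by 2 hours

def cross_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

for lag in range(-3, 4):
    print(f"lag {lag:+d}h: r = {cross_corr(sensor_a, sensor_b, lag):.2f}")
```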
On the viability of pro-active automated PII breach detection: A South African case study
- Swart, Ignus, Irwin, Barry V W, Grobler, Marthie
- Authors: Swart, Ignus , Irwin, Barry V W , Grobler, Marthie
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/430235 , vital:72676 , https://doi.org/10.1145/2664591.2664600
- Description: Various reasons exist why certain types of information are deemed personal, both by legislation and by society. While crimes such as identity theft and impersonation have always existed, the rise of the internet and social media has exacerbated the problem. South Africa has recently joined the growing ranks of countries passing legislation to ensure the privacy of certain types of data. As is the case with most implemented security enforcement systems, most appointed privacy regulators operate in a reactive way. While this is a completely acceptable method of operation, it is not the most efficient. Research has shown that most data leaks containing personal information remain available for more than a month on average before being detected and reported. Quite often the data is discovered by a third party who may choose to notify the responsible organisation, but can just as easily copy the data and make use of it. This paper demonstrates the potential benefit a privacy regulator can expect to see by implementing pro-active detection of electronic personally identifiable information (PII). Adopting pro-active detection of PII exposed on public networks can potentially contribute to a significant reduction in exposure time. The results discussed in this paper were obtained by means of experimentation on a custom-created PII detection system.
- Full Text:
- Date Issued: 2014
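The abstract refers to a custom-created PII detection system without detailing it. As a hedged illustration of pro-active PII detection in general, the sketch below scans text for e-mail addresses and 13-digit South African ID number candidates, applying a date sanity check and the standard Luhn checksum; the regular expressions and sample text are assumptions, not the authors' implementation.

```python
"""Hedged sketch of pro-active PII detection: scan text for candidate e-mail
addresses and 13-digit South African ID numbers. The patterns and the Luhn
check are standard public knowledge, not the system described in the paper."""
import re
from datetime import datetime

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SA_ID_RE = re.compile(r"\b\d{13}\b")

def luhn_valid(digits: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:                    # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def plausible_sa_id(candidate: str) -> bool:
    try:                                  # first six digits must be a valid YYMMDD date
        datetime.strptime(candidate[:6], "%y%m%d")
    except ValueError:
        return False
    return luhn_valid(candidate)

def find_pii(text: str):
    emails = EMAIL_RE.findall(text)
    ids = [c for c in SA_ID_RE.findall(text) if plausible_sa_id(c)]
    return {"emails": emails, "id_numbers": ids}

print(find_pii("contact: jane.doe@example.co.za id: 8001015009087"))
```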
Towards a Sandbox for the Deobfuscation and Dissection of PHP Malware
- Wrench, Peter M, Irwin, Barry V W
- Authors: Wrench, Peter M , Irwin, Barry V W
- Date: 2014
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429700 , vital:72633 , 10.1109/ISSA.2014.6950504
- Description: The creation and proliferation of PHP-based Remote Access Trojans (or web shells) used in both the compromise and post-exploitation of web platforms has fuelled research into automated methods of dissecting and analysing these shells. Current malware tools disguise themselves by making use of obfuscation techniques designed to frustrate any efforts to dissect or reverse engineer the code. Advanced code engineering can even cause malware to behave differently if it detects that it is not running on the system for which it was originally intended. To combat these defensive techniques, this paper presents a sandbox-based environment that aims to accurately mimic a vulnerable host and is capable of semi-automatic semantic dissection and syntactic deobfuscation of PHP code.
- Full Text:
- Date Issued: 2014
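The abstract describes semi-automatic syntactic deobfuscation of PHP web shells. A common form of such obfuscation wraps code in eval(base64_decode(...)) or eval(gzinflate(base64_decode(...))) layers; the sketch below unwraps layers of that kind iteratively as a general illustration only, and is not the sandbox built by the authors.

```python
"""Hedged sketch of layered PHP deobfuscation: repeatedly unwrap
eval(base64_decode(...)) and eval(gzinflate(base64_decode(...))) layers until
the code stops changing."""
import base64
import re
import zlib

LAYER_RE = re.compile(
    r"eval\s*\(\s*(gzinflate\s*\(\s*)?base64_decode\s*\(\s*['\"]([A-Za-z0-9+/=\s]+)['\"]\s*\)\s*(\))?\s*\)\s*;?",
    re.IGNORECASE,
)

def unwrap_once(code: str) -> str:
    def _replace(match: re.Match) -> str:
        payload = base64.b64decode(match.group(2))
        if match.group(1):                        # gzinflate() == raw DEFLATE
            payload = zlib.decompress(payload, -15)
        return payload.decode("latin-1", errors="replace")
    return LAYER_RE.sub(_replace, code)

def deobfuscate(code: str, max_layers: int = 10) -> str:
    for _ in range(max_layers):
        unwrapped = unwrap_once(code)
        if unwrapped == code:                     # no further layers found
            break
        code = unwrapped
    return code

inner = "echo 'web shell payload';"
sample = "<?php eval(base64_decode('%s')); ?>" % base64.b64encode(inner.encode()).decode()
print(deobfuscate(sample))
```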
A baseline study of potentially malicious activity across five network telescopes
- Authors: Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429714 , vital:72634 , https://ieeexplore.ieee.org/abstract/document/6568378
- Description: This paper explores the Internet Background Radiation (IBR) observed across five distinct network telescopes over a 15-month period. These network telescopes each consist of a /24 netblock and are deployed in IP space administered by TENET, the tertiary education network in South Africa, covering three numerically distant /8 network blocks. The differences and similarities in the observed network traffic are explored. Two anecdotal case studies are presented relating to the MS08-067 and MS12-020 vulnerabilities in the Microsoft Windows platforms. The first of these is related to the Conficker worm outbreak in 2008, and traffic targeting 445/tcp remains one of the top constituents of IBR as observed on the telescopes. The case of MS12-020 is of interest, as a long period of scanning activity targeting 3389/tcp, used by the Microsoft RDP service, was observed, with a significant drop in activity following the release of the security advisory and patch. Other areas of interest are highlighted, particularly where correlation in scanning activity was observed across the sensors. The paper concludes with some discussion on the application of network telescopes as part of a cyber-defence solution.
- Full Text:
- Date Issued: 2013
Developing a virtualised testbed environment in preparation for testing of network based attacks
- Van Heerden, Renier, Pieterse, Heloise, Burke, Ivan, Irwin, Barry V W
- Authors: Van Heerden, Renier , Pieterse, Heloise , Burke, Ivan , Irwin, Barry V W
- Date: 2013
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429648 , vital:72629 , 10.1109/ICASTech.2013.6707509
- Description: Computer network attacks are difficult to simulate due to the damage they may cause to live networks and the complexity required to simulate a useful network. We constructed a virtualised network within a vSphere ESXi environment which is able to simulate thirty workstations, ten servers, three distinct network segments and the accompanying network traffic. The vSphere environment provided added benefits, such as the ability to pause, restart and snapshot virtual computers. These abilities enabled the authors to reset the simulation environment before each test and mitigated against the damage that an attack potentially inflicts on the test network. Without simulated network traffic, the virtualised network was too sterile. This resulted in any network event being a simple task to detect, making network traffic simulation a requirement for an event detection test bed. Five main kinds of traffic were simulated: web browsing, file transfer, e-mail, version control and intranet file traffic. The simulated traffic volumes were pseudo-randomised to represent differing temporal patterns. By building a virtualised network with simulated traffic we were able to test IDSs and other network attack detection sensors in a much more realistic environment before moving them to a live network. The goal of this paper is to present a virtualised testbed environment in which network attacks can safely be tested.
- Full Text:
- Date Issued: 2013
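The abstract notes that simulated traffic volumes were pseudo-randomised to represent differing temporal patterns. The sketch below shows one way such a schedule could be generated for the five traffic classes named in the abstract; the diurnal profile, base volumes and noise parameters are invented assumptions, not the testbed's actual generator.

```python
"""Illustrative sketch of pseudo-randomised traffic scheduling: each simulated
traffic class gets an hourly flow count drawn around a working-hours profile.
Profile shape and volumes are assumptions made for illustration."""
import math
import random

TRAFFIC_CLASSES = ["web", "file_transfer", "email", "version_control", "intranet_file"]

def hourly_volume(traffic_class: str, hour: int, rng: random.Random) -> int:
    # Simple working-hours bias: peaks mid-day, quiet overnight (assumption).
    diurnal = 0.2 + 0.8 * max(0.0, math.sin(math.pi * (hour - 7) / 10)) if 7 <= hour <= 17 else 0.2
    base = {"web": 400, "file_transfer": 60, "email": 120,
            "version_control": 40, "intranet_file": 80}[traffic_class]
    return max(0, int(rng.gauss(base * diurnal, base * 0.15)))   # flows per hour

rng = random.Random(42)
for hour in range(24):
    row = {c: hourly_volume(c, hour, rng) for c in TRAFFIC_CLASSES}
    print(f"{hour:02d}:00", row)
```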
Visualization of a data leak
- Swart, Ignus, Grobler, Marthie, Irwin, Barry V W
- Authors: Swart, Ignus , Grobler, Marthie , Irwin, Barry V W
- Date: 2013
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428584 , vital:72522 , 10.1109/ISSA.2013.6641046
- Description: The potential impact that data leakage can have on a country, both on a national level and on an individual level, can be wide-reaching and potentially catastrophic. In January 2013, several South African companies became the target of a hack attack, resulting in the breach of security measures and the leaking of a claimed 700 000 records. The affected companies are spread across a number of domains, giving the leak a very wide impact area. The aim of this paper is to analyze the data released from the South African breach and to visualize the extent of the loss by the companies affected. The value of this work lies in its connection to and interpretation of related South African legislation. The data extracted during the analysis is primarily personally identifiable information, as defined by the Electronic Communications and Transactions Act of 2002 and the Protection of Personal Information Bill of 2009.
- Full Text:
- Date Issued: 2013
A network telescope perspective of the Conficker outbreak
- Authors: Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429728 , vital:72635 , 10.1109/ISSA.2012.6320455
- Description: This paper discusses a dataset of some 16 million packets targeting port 445/tcp collected by a network telescope utilising a /24 netblock in South African IP address space. An initial overview of the collected data is provided. This is followed by a detailed analysis of the packet characteristics observed, including size and TTL. The peculiarities of the observed target selection and the results of the flaw in the Conficker worm's propagation algorithm are presented. An analysis of the 4 million observed source hosts is reported, grouped by both packet counts and the number of distinct hosts per network address block. Address blocks of size /8, /16 and /24 are used for the groupings. The localisation, by geographic region and numerical proximity, of high-ranking aggregate netblocks is highlighted. The paper concludes with some overall analyses, and consideration of the application of network telescopes to the monitoring of such outbreaks in the future.
- Full Text:
- Date Issued: 2012
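The abstract reports aggregating roughly 4 million source hosts by packet counts and distinct hosts per /8, /16 and /24 block. The sketch below illustrates that aggregation step on placeholder (source, packet count) tuples; the sample addresses and counts are assumptions.

```python
"""Sketch of the source-aggregation step: group observed source addresses into
/8, /16 and /24 netblocks and rank blocks by packets and distinct hosts."""
from collections import defaultdict
from ipaddress import ip_network

# (source_ip, packet_count) pairs; illustrative placeholders only.
observations = [("198.51.100.12", 40), ("198.51.100.90", 7),
                ("203.0.113.5", 120), ("198.51.77.3", 3)]

def aggregate(observations, prefix_len):
    packets = defaultdict(int)
    hosts = defaultdict(set)
    for src, count in observations:
        block = ip_network(f"{src}/{prefix_len}", strict=False)
        packets[block] += count
        hosts[block].add(src)
    # Rank blocks by total packet count, also reporting distinct hosts.
    return sorted(((blk, packets[blk], len(hosts[blk])) for blk in packets),
                  key=lambda item: item[1], reverse=True)

for prefix in (8, 16, 24):
    print(f"/{prefix} ranking (block, packets, distinct hosts):")
    for row in aggregate(observations, prefix):
        print("  ", row)
```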
An Analysis and Implementation of Methods for High Speed Lexical Classification of Malicious URLs
- Egan, Shaun P, Irwin, Barry V W
- Authors: Egan, Shaun P , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429757 , vital:72637 , https://digifors.cs.up.ac.za/issa/2012/Proceedings/Research/58_ResearchInProgress.pdf
- Description: Several authors have put forward methods of using Artificial Neural Networks (ANNs) to classify URLs as malicious or benign by using lexical features of those URLs. These methods have been compared to other methods of classification, such as blacklisting and spam filtering, and have been found to be just as accurate; early attempts already proved to be highly accurate. Fully featured classifiers use lexical features as well as lookups to classify URLs and include (but are not limited to) blacklists, spam filters and reputation services. These classifiers are based on the Online Perceptron Model, using a single neuron as a linear combiner, and use lexical features that rely on the presence (or lack thereof) of words belonging to a bag-of-words. Several obfuscation-resistant features are also used to increase the positive classification rate of these perceptrons. Examples of these include URL length, number of directory traversals and length of arguments passed to the file within the URL. In this paper we describe how we implement the online perceptron model and the methods that we used to try to increase the accuracy of this model through the use of hidden layers and training cost validation. We discuss our results in relation to those of other papers, as well as other analysis performed on the training data and the neural networks themselves, to best understand why they are so effective. Also described are the proposed model for developing these neural networks, how to implement them in the real world through the use of browser extensions, proxy plugins and spam filters for mail servers, and our current implementation. Finally, work that is still in progress is described. This work includes other methods of increasing accuracy through the use of modern training techniques and testing in a real-world environment.
- Full Text:
- Date Issued: 2012
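The abstract describes an online perceptron over lexical URL features such as bag-of-words presence, URL length, directory traversals and argument length. The sketch below implements a minimal single-neuron online perceptron over features of that kind; the word list, scaling factors and training URLs are illustrative assumptions, not the authors' feature set or data.

```python
"""Sketch of an online perceptron over lexical URL features: a single linear
unit over bag-of-words presence plus simple length-style features."""
from urllib.parse import urlparse

BAG_OF_WORDS = ["login", "update", "secure", "account", "free", "verify"]

def features(url: str):
    parsed = urlparse(url)
    lowered = url.lower()
    feats = [1.0]                                            # bias term
    feats += [1.0 if w in lowered else 0.0 for w in BAG_OF_WORDS]
    feats.append(len(url) / 100.0)                           # URL length
    feats.append(parsed.path.count("/") / 10.0)              # directory traversals
    feats.append(len(parsed.query) / 100.0)                  # argument length
    return feats

class OnlinePerceptron:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) > 0 else 0

    def update(self, x, label):                              # label: 1 = malicious
        error = label - self.predict(x)
        if error:
            self.w = [wi + self.lr * error * xi for wi, xi in zip(self.w, x)]

samples = [("http://example.com/docs/index.html", 0),
           ("http://paypa1-secure-login.example.net/verify/account?id=123456", 1)]
model = OnlinePerceptron(n_features=len(features(samples[0][0])))
for _ in range(5):                                           # a few online passes
    for url, label in samples:
        model.update(features(url), label)
print([model.predict(features(u)) for u, _ in samples])
```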
Building a Graphical Fuzzing Framework
- Zeisberger, Sascha, Irwin, Barry V W
- Authors: Zeisberger, Sascha , Irwin, Barry V W
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429772 , vital:72638 , https://digifors.cs.up.ac.za/issa/2012/Proceedings/Research/59_ResearchInProgress.pdf
- Description: Fuzz testing is a robustness testing technique that sends malformed data to an application’s input. This is to test an application’s behaviour when presented with input beyond its specification. The main difference between traditional testing techniques and fuzz testing is that in most traditional techniques an application is tested according to a specification and rated on how well the application conforms to that specification. Fuzz testing tests beyond the scope of a specification by intelligently generating values that may be interpreted by an application in an unintended manner. The use of fuzz testing has been more prevalent in academic and security communities despite showing success in production environments. To measure the effectiveness of fuzz testing, an experiment was conducted where several publicly available applications were fuzzed. In some instances, fuzz testing was able to force an application into an invalid state and it was concluded that fuzz testing is a relevant testing technique that could assist in developing more robust applications. This success prompted a further investigation into fuzz testing in order to compile a list of requirements that makes an effective fuzzer. The aforementioned investigation assisted in the design of a fuzz testing framework, the goal of which is to make the process more accessible to users outside of an academic and security environment. Design methodologies and justifications of said framework are discussed, focusing on the graphical user interface components as this aspect of the framework is used to increase the usability of the framework.
- Full Text:
- Date Issued: 2012
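The abstract defines fuzz testing as sending malformed data to an application's input. The sketch below shows a minimal mutation fuzzer run against a toy parser; the mutation strategy, iteration count and the stand-in target are assumptions and do not reflect the framework described in the paper.

```python
"""Minimal mutation-fuzzing sketch: take a valid seed input, flip random bytes,
feed each mutant to the target, and record any crash. The toy parser stands in
for a real application under test."""
import random

def mutate(seed: bytes, rng: random.Random, max_flips: int = 4) -> bytes:
    data = bytearray(seed)
    for _ in range(rng.randint(1, max_flips)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def toy_target(data: bytes) -> None:
    """Stand-in application: 'crashes' (raises) on malformed length fields."""
    if len(data) < 4:
        raise ValueError("truncated header")
    length = int.from_bytes(data[:4], "big")
    if length > len(data) - 4:
        raise ValueError("declared length exceeds payload")

def fuzz(seed: bytes, iterations: int = 1000, rng_seed: int = 7):
    rng = random.Random(rng_seed)
    crashes = []
    for i in range(iterations):
        mutant = mutate(seed, rng)
        try:
            toy_target(mutant)
        except Exception as exc:          # a real harness would also catch signals
            crashes.append((i, repr(exc)))
    return crashes

seed = len(b"hello fuzzing").to_bytes(4, "big") + b"hello fuzzing"
print(f"{len(fuzz(seed))} crashing inputs found out of 1000 mutants")
```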
Remote fingerprinting and multisensor data fusion
- Hunter, Samuel O, Stalmans, Etienne, Irwin, Barry V W, Richter, John
- Authors: Hunter, Samuel O , Stalmans, Etienne , Irwin, Barry V W , Richter, John
- Date: 2012
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429813 , vital:72641 , 10.1109/ISSA.2012.6320449
- Description: Network fingerprinting is the technique by which a device or service is enumerated in order to determine the hardware, software or application characteristics of a targeted attribute. Although fingerprinting can be achieved by a variety of means, the most common technique is the extraction of characteristics from an entity and the correlation thereof against known signatures for verification. In this paper we identify multiple host-defining metrics and propose a process of unique host tracking through the use of two novel fingerprinting techniques. We then illustrate the application of host fingerprinting and tracking for increasing situational awareness of potentially malicious hosts. In order to achieve this we provide an outline of an adapted multisensor data fusion model with the goal of increasing situational awareness through observation of unsolicited network traffic.
- Full Text:
- Date Issued: 2012
Social recruiting: a next generation social engineering attack
- Schoeman, A H B, Irwin, Barry V W
- Authors: Schoeman, A H B , Irwin, Barry V W
- Date: 2012
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428600 , vital:72523 , https://www.jstor.org/stable/26486876
- Description: Social engineering attacks initially experienced success due to the lack of understanding of the attack vector and the resultant lack of remedial actions. Due to an increase in media coverage, corporate bodies have begun to defend their interests from this vector. This has resulted in a new generation of social engineering attacks that have adapted to the industry response. These new forms of attack take into account the increased likelihood that they will be detected, rendering traditional defences against social engineering attacks moot. This paper highlights these attacks and explains why traditional defences fail to address them, as well as suggesting new methods of incident response.
- Full Text:
- Date Issued: 2012
A framework for DNS based detection and mitigation of malware infections on a network
- Stalmans, Etienne, Irwin, Barry V W
- Authors: Stalmans, Etienne , Irwin, Barry V W
- Date: 2011
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429827 , vital:72642 , 10.1109/ISSA.2011.6027531
- Description: Modern botnet trends have led to the use of IP and domain fast-fluxing to avoid detection and increase resilience. These techniques bypass traditional detection systems such as blacklists and intrusion detection systems. The Domain Name Service (DNS) is one of the most prevalent protocols on modern networks and is essential for the correct operation of many network activities, including botnet activity. For this reason DNS forms the ideal candidate for monitoring, detecting and mitigating botnet activity. In this paper a system placed at the network edge is developed with the capability to detect fast-flux domains using DNS queries. Multiple domain features were examined to determine which would be most effective in the classification of domains. This is achieved using a C5.0 decision tree classifier and Bayesian statistics, with positive samples being labeled as potentially malicious and negative samples as legitimate domains. The system detects malicious domain names with a high degree of accuracy, minimising the need for blacklists. Statistical methods, namely Naive Bayesian, Bayesian, total variation distance and probability distribution, are applied to detect malicious domain names. The detection techniques are tested against sample traffic and it is shown that malicious traffic can be detected with low false positive rates.
- Full Text:
- Date Issued: 2011
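The abstract classifies domains using a C5.0 decision tree and Bayesian statistics over multiple domain features. As a hedged stand-in for the Bayesian side of that idea, the sketch below scores domains with a tiny Laplace-smoothed Bernoulli naive Bayes over binary DNS-derived features; the feature definitions, training rows and query are invented placeholders, not the paper's models or data.

```python
"""Hedged sketch of Bayesian fast-flux scoring: a Bernoulli naive Bayes over
binary DNS features (low TTL, many A records, many networks)."""
import math

FEATURES = ("ttl_below_300", "more_than_5_a_records", "more_than_3_networks")

# (feature dict, label) rows; 1 = fast-flux, 0 = legitimate. Illustrative only.
TRAINING = [
    ({"ttl_below_300": 1, "more_than_5_a_records": 1, "more_than_3_networks": 1}, 1),
    ({"ttl_below_300": 1, "more_than_5_a_records": 1, "more_than_3_networks": 0}, 1),
    ({"ttl_below_300": 0, "more_than_5_a_records": 0, "more_than_3_networks": 0}, 0),
    ({"ttl_below_300": 1, "more_than_5_a_records": 0, "more_than_3_networks": 0}, 0),
]

def train(rows):
    counts = {0: 0, 1: 0}
    feature_counts = {0: dict.fromkeys(FEATURES, 0), 1: dict.fromkeys(FEATURES, 0)}
    for feats, label in rows:
        counts[label] += 1
        for f in FEATURES:
            feature_counts[label][f] += feats[f]
    return counts, feature_counts

def log_posterior(feats, label, counts, feature_counts):
    total = sum(counts.values())
    logp = math.log(counts[label] / total)                   # class prior
    for f in FEATURES:
        # Laplace-smoothed P(feature = 1 | label)
        p = (feature_counts[label][f] + 1) / (counts[label] + 2)
        logp += math.log(p if feats[f] else 1 - p)
    return logp

counts, feature_counts = train(TRAINING)
query = {"ttl_below_300": 1, "more_than_5_a_records": 1, "more_than_3_networks": 1}
scores = {label: log_posterior(query, label, counts, feature_counts) for label in (0, 1)}
print("classified as fast-flux" if scores[1] > scores[0] else "classified as legitimate")
```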
Tartarus: A honeypot based malware tracking and mitigation framework
- Hunter, Samuel O, Irwin, Barry V W
- Authors: Hunter, Samuel O , Irwin, Barry V W
- Date: 2011
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428629 , vital:72525 , https://d1wqtxts1xzle7.cloudfront.net/96055420/Hunter-libre.pdf?1671479103=andresponse-content-disposi-tion=inline%3B+filename%3DTartarus_A_honeypot_based_malware_tracki.pdfandExpires=1714722666andSignature=JtPpR-IoAXILqsIJSlmCEvn6yyytE17YLQBeFJRKD5aBug-EbLxFpEGDf4GtQXHbxHvR4~E-b5QtMs1H6ruSYDti9fIHenRbLeepZTx9jYj92to3qZjy7UloigYbQuw0Y6sN95jI7d4HX-Xkspbz0~DsnzwFmLGopg7j9RZSHqpSpI~fBvlml3QQ2rLCm4aB9u8tSW8du5u~FiJgiLHNgJaPzEOzy4~yfKkXBh--LTFdgeAVYxQbOESGGh9k5bc-LDJhQ6dD5HpXsM3wKJvYuVyU6m83vT2scogVgKHIr-t~XuiqL35PfI3hs2c~ZO0TH4hCqwiNMHQ8GCYsLvllsA__andKey-Pair-Id=APKAJLOHF5GGSLRBV4ZA
- Description: On a daily basis many of the hosts connected to the Internet experience continuous probing and attack from malicious entities. Detection and defence against these malicious entities has primarily been the concern of Intrusion Detection Systems, Intrusion Prevention Systems and Anti-Virus software. These systems rely heavily on known signatures to detect nefarious traffic. Due to the reliance on known malicious signatures, these systems have been at a serious disadvantage when it comes to detecting new, never before seen malware. This paper introduces Tartarus, a malware tracking and mitigation framework that makes use of honeypot technology in order to detect malicious traffic. Tartarus implements a dynamic quarantine technique to mitigate the spread of self-propagating malware on a production network. In order to better understand the spread and impact of internet worms, Tartarus is used to construct a detailed demographic of potentially malicious hosts on the internet. This host demographic is in turn used as a blacklist for firewall rule creation. The sources of malicious traffic are then illustrated through the use of a geolocation-based visualisation.
- Full Text:
- Date Issued: 2011
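The abstract states that the host demographic built by Tartarus is used as a blacklist for firewall rule creation. The sketch below shows that final step in its simplest form, turning observed attacker addresses into iptables DROP rules; the addresses are placeholders and the rules are printed rather than applied, since the paper does not detail its rule-generation mechanism.

```python
"""Sketch of blacklist-to-firewall-rule generation: turn a list of source
addresses observed attacking a honeypot into iptables DROP rules."""
from ipaddress import ip_address

observed_attackers = ["203.0.113.9", "198.51.100.77", "not-an-ip"]

def to_drop_rules(addresses):
    rules = []
    for addr in addresses:
        try:
            ip = ip_address(addr)          # skip anything that is not a valid IP
        except ValueError:
            continue
        rules.append(f"iptables -A INPUT -s {ip} -j DROP")
    return rules

for rule in to_drop_rules(observed_attackers):
    print(rule)
```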
Cyber security: Challenges and the way forward
- Ayofe, Azeez N, Irwin, Barry V W
- Authors: Ayofe, Azeez N , Irwin, Barry V W
- Date: 2010
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428613 , vital:72524 , https://d1wqtxts1xzle7.cloudfront.net/62565276/171920200330-53981-1mqgyr5.pdf?1585592737=andresponse-content-disposi-tion=inline%3B+filename%3DCYBER_SECURITY_CHALLENGES_AND_THE_WAY_FO.pdfandExpires=1714729368andSignature=dPUCAd1sMUF-gyDTkBFb2lzDvkVNpfp0sk1z-CdAeHH6O759dBiO-M158drmJsOo1XtOJBY4tNd8Um2gi11zw4U8yEzHO-bGUJGJTJcooTXaKwZLT-wPqS779Qo2oeiQOIiuAx6zSdcfSGjbDfFOL1YWV9UeKvhtcnGJ3p-CjJAhiPWJorGn1-z8mO6oouWzyJYc0hV0-Po8yywJD60eC2S6llQmfNRpX4otgq4fgZwZu4TEcMUWPfBzGPFPNYcCLfiQVK0YLV~XdTCWrhTlYPSMzVSs~DhQk9QPBU7IGmzQkGZo3UXnNu1slCVLb9Dqm~9DSbmttIXIDGYXEjP9l4w__andKey-Pair-Id=APKAJLOHF5GGSLRBV4ZA
- Description: The high level of insecurity on the internet is becoming worrisome, so much so that transacting on the web has become a thing of doubt. Cybercrime is becoming ever more serious and prevalent. Findings from the 2002 Computer Crime and Security Survey show an upward trend that demonstrates a need for a timely review of existing approaches to fighting this new phenomenon in the information age. In this paper, we provide an overview of cybercrime and present an international perspective on fighting it. This work seeks to define the concept of cyber-crime, explain the tools used by criminals to perpetrate it, identify reasons for cyber-crime and how it can be eradicated, look at those involved and the reasons for their involvement, examine how best to detect a criminal mail, and, in conclusion, proffer recommendations that would help in checking the increasing rate of cyber-crimes and criminals.
- Full Text:
- Date Issued: 2010
Wireless Security Tools
- Janse van Rensburg, Johanna, Irwin, Barry V W
- Authors: Janse van Rensburg, Johanna , Irwin, Barry V W
- Date: 2006
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/429867 , vital:72647 , https://digifors.cs.up.ac.za/issa/2006/Proceedings/Research/113_Paper.pdf
- Description: Detecting and investigating intrusive Internet activity is an ever-present challenge for network administrators and security researchers. Network monitoring can generate large, unmanageable amounts of log data, which further complicates distinguishing between illegitimate and legitimate traffic. Considering the above issue, this article has two aims. First, it describes an investigative methodology for network monitoring and traffic review; and second, it discusses results from applying this method. The method entails a combination of network telescope traffic capture and visualisation. Observing traffic from the perspective of a dedicated sensor network reduces the volume of data and alleviates the concern of confusing malicious traffic with legitimate traffic. Complementing this, visual analysis facilitates the rapid review and correlation of events, thereby utilizing human intelligence in the identification of scanning patterns. To demonstrate the proposed method, several months of network telescope traffic is captured and analysed with a tailor-made 3D scatter-plot visualisation. As the results show, the visualisation saliently conveys anomalous patterns, and further analysis reveals that these patterns are indicative of covert network probing activity. By incorporating visual analysis with traditional approaches, such as textual log review and the use of an intrusion detection system, this research contributes improved insight into network scanning incidents.
- Full Text:
- Date Issued: 2006
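The abstract analyses telescope traffic with a tailor-made 3D scatter-plot visualisation. The sketch below plots a comparable, but generic, 3D scatter of synthetic packet records with time, source address and destination port as axes; the axis choice, the 445/tcp bias and the data are assumptions rather than the authors' tool.

```python
"""Illustrative 3D scatter plot of telescope traffic: time on one axis, source
address (as an integer) on another, destination port on the third. The packet
records are synthetic placeholders."""
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (needed on older Matplotlib)

rng = np.random.default_rng(3)
n = 2000
times = np.sort(rng.uniform(0, 72, n))                  # hours since capture start
sources = rng.integers(0, 2**32, n, dtype=np.uint64)    # source IPv4 as integer
ports = np.where(rng.random(n) < 0.4, 445,              # a heavy 445/tcp component
                 rng.integers(1, 65536, n))

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(times, sources, ports, s=2)
ax.set_xlabel("time (hours)")
ax.set_ylabel("source address")
ax.set_zlabel("destination port")
plt.show()
```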
Lessons Learned In The Global Deployment Of An Open Source Security Solution
- Authors: Irwin, Barry V W
- Date: 2004
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/428830 , vital:72539 , https://digifors.cs.up.ac.za/issa/2004/Proceedings/Research/023.pdf
- Description: This paper covers the lessons learned, and the challenges facing, the deployment of an Open Source derivative firewall and Virtual Private Network (VPN) solution into the established, commercially driven telecommunications arena. The lessons learned and issues discussed are drawn from the author's own experiences having worked for a global Wireless Application Service Provider over a three-year period. Focus is placed on the issues surrounding the integration of the open source equipment with that of the established global telecommunications players, where compliance with existing network standards was a requirement for connectivity. Major stumbling blocks to the acceptance and success of the open source product were the concerns expressed by the Telecommunications Operators relating to interoperability and troubleshooting facilities when interfaced with their existing commercially available equipment and existing rigid telecommunications networks. The processes resulting in the initial decision to utilise an open source solution in preference to commercial offerings are also explored. An open source solution was found to offer higher flexibility and functionality and greater return on investment, whilst maintaining a significantly reduced cost in comparison to the commercial solutions available.
- Full Text:
- Date Issued: 2004