
TCII'S RESEARCH OVERVIEW

Our Motto: Creating value by making connections

Vision Statement

"To be the world leading research group in multi, inter and trans-disciplinary collaborative research, empowered by cutting edge by real time automation, information technologies and communication intelligence."

We are actively engaged with researchers in science, social science, engineering and industry, both nationally and internationally, to address important issues that confront Defence, business and humanity today.

TCII Research Area Highlights

TCII Research Area 1: Data Mining and Big Data Analytics

Data mining is the process of analysing data and finding correlations and patterns, using intelligent computer technologies and various statistical analytic tools and software, to turn large, complex data sets into meaningful information and knowledge. In Data Mining, we study and conduct research into the following topics:

  • Complex data structures including trees, graphs, text and spatial-temporal data
  • Mixed information types including image, video, web and text
  • Data that is distributed across multiple data sources
  • Joint mining of structured, semi-structured and unstructured information
  • Rare events whose significance could be masked by more frequently occurring events
  • Data, text and content mining
  • Web mining and document management
  • Knowledge discovery, representation and knowledge mining
  • Integrating a knowledge base from a data mining system and applying this knowledge during the data mining
  • Integrating a wide range of data mining techniques and methods and deriving incremental new knowledge from large data sets and prior knowledge

The types of knowledge extracted from information mining activities include:

  • Embedded structures and relationships, leading to associations between these embedded structures and embedded trees rather than just between simple variables; and
  • Knowledge that conforms to a certain model structure to enhance the model.

The sources of information addressed in such mining activities include:

  • Information gathered by a corporation or enterprise about its clients;
  • Information provided by customers or viewers as a result of their own choice such as product reviews and trustworthiness information. Such information is of considerable importance in trust, reputation and risk assessment systems; and
  • Information on social networking sites, to permit opinion mining.

1) Mining of Data with Complex Structures (Researchers: Henry Tan, Elizabeth Chang, Tharam Dillon)

The emergence of data with complex structures is evident in many domains, such as bioinformatics, chemistry, Web intelligence applications, business process modelling and scientific knowledge management. When such data sources are used, the attributes of the domain are often organised in a hierarchical (tree) or graph structure to enable a more semantic representation of the (complex) properties and relationships of data objects. Mining such data poses additional challenges in the data mining field, as it requires data mining methods capable of taking the complex structure of the data into account and preserving the relationships and structural properties in the extracted knowledge patterns. To enable association rule mining of tree-structured data (e.g. XML), we have developed a general framework for mining frequent ordered and unordered induced/embedded subtrees from a database of rooted, ordered, labelled trees using any of the existing support definitions. A number of important applications of tree mining are also investigated, such as web mining, knowledge matching, protein structure analysis and mining of health information. More recent work focuses on the use of classification and clustering techniques on tree-structured data.
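
As a flavour of the computation involved, the toy sketch below counts transaction-based support for an ordered induced subtree pattern over a small tree database. It is a deliberately simplified stand-in, assuming an invented (label, children) tree encoding; the group's actual framework also covers unordered and embedded subtrees and other support definitions.

```python
# Toy ordered induced subtree matching and transaction-based support.
# Trees are (label, [children]) tuples; a simplified illustration only.

def occurs_at(pattern, node):
    """True if `pattern` occurs as an ordered induced subtree rooted at `node`."""
    p_label, p_children = pattern
    n_label, n_children = node
    if p_label != n_label:
        return False
    i = 0                                   # match pattern children against an
    for p_child in p_children:              # ordered subsequence of node children
        while i < len(n_children) and not occurs_at(p_child, n_children[i]):
            i += 1
        if i == len(n_children):
            return False
        i += 1
    return True

def occurs_in(pattern, tree):
    """True if `pattern` occurs anywhere inside `tree`."""
    return occurs_at(pattern, tree) or any(occurs_in(pattern, c) for c in tree[1])

def support(pattern, database):
    """Fraction of database trees that contain the pattern at least once."""
    return sum(occurs_in(pattern, t) for t in database) / len(database)

# Two tiny XML-like document trees and one candidate subtree pattern.
db = [
    ("book", [("title", []), ("author", [("name", [])])]),
    ("book", [("author", [("name", [])]), ("year", [])]),
]
print(support(("book", [("author", [("name", [])])]), db))   # -> 1.0
```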

2) Data Quality Assurance (Researchers: Elizabeth Chang, Alex Talevski)

Traditional research on data quality and data cleaning focuses on individual records (i.e. intra-record) stored in databases. A number of techniques have been proposed for data cleansing using common statistical, clustering or inference-based techniques. All of these operate at the schema and intra-instance levels only, cleaning existing data in the database; they do not provide any method to ensure that future data entered into the system is free from errors. Our focus is on a data cleansing approach that operates at the business logic, schema, record matching and intra-instance levels, so as not only to clean existing data errors in a given database but also to ensure that newly entered data are error-free. Automated and scalable data mining techniques are used in each phase of the data cleansing process to identify common and unexpected types of errors in the database. Automated techniques detect the source of data quality problems through business activity and system logic monitoring, and prevent their recurrence by validating and standardising both data entry and business process execution.
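
A minimal sketch of the two-pronged idea, under invented field names, rules and thresholds: records are validated against business rules at entry time so new errors never reach the database, while existing rows are flagged statistically.

```python
# Entry-time validation plus statistical flagging of existing rows.
# Rules, fields and thresholds below are illustrative assumptions only.
import statistics

ENTRY_RULES = {                              # hypothetical business rules
    "age":   lambda v: isinstance(v, (int, float)) and 0 <= v <= 120,
    "email": lambda v: isinstance(v, str) and "@" in v,
}

def validate_entry(record):
    """Return the fields that violate a rule; reject the record if non-empty."""
    return [f for f, ok in ENTRY_RULES.items() if f in record and not ok(record[f])]

def flag_outliers(rows, field, z_cut=3.0):
    """Flag stored rows whose `field` value is a z-score outlier."""
    values = [r[field] for r in rows]
    mu, sigma = statistics.mean(values), statistics.pstdev(values)
    return [] if sigma == 0 else [r for r in rows if abs(r[field] - mu) / sigma > z_cut]

print(validate_entry({"age": 250, "email": "bob"}))        # ['age', 'email']
rows = [{"age": a} for a in (34, 29, 41, 38, 240)]
print(flag_outliers(rows, "age", z_cut=1.5))               # [{'age': 240}]
```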

3) Conjoint Mining of Complementary Data (Researchers: Elizabeth Chang, Tharam Dillon)

Digital information within an enterprise consists of (1) structured data and (2) semi-structured/unstructured content. The structured data includes enterprise and business data like sales, customer and product information, accounts and inventory, while the content includes contracts, emails, customer opinions, transcribed calls, etc. The structured data is managed by the relational database system (RDB), the semi/unstructured content is managed by the content manager, and the two exist as silos that are managed and queried separately. This is undesirable, since the information content of these sources is complementary. This project is concerned with developing a methodology and techniques for deriving business intelligence and novel knowledge from structured, semi-structured and unstructured information conjointly. The methodology would be useful in several business intelligence and information integration applications, such as managing customer attrition, targeted marketing, fraud detection and prevention, compliance and customer relationship management, as well as in a number of fields that involve information of high complexity, such as Bioinformatics, Logistics, Business and Electrical Power Systems (EPS). The broad aims of the project are to:

  • use structured data to disambiguate text segments and link them to records for BI investigations (a toy sketch of this step follows the list);
  • use ontologies to disambiguate and annotate content segments to permit BI investigations;
  • develop methods of BI analysis for conjoint analysis of linked structured and content elements; and
  • apply these techniques to the investigation of business, biomedical and EPS problems.
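
A toy sketch of the record-linking step referenced in the first aim, using an invented customer table and matching rule: a mention found in unstructured text is linked to a structured record, and a structured attribute breaks ties between candidates.

```python
# Linking a text mention to a structured record; data and rule are invented.

customers = [
    {"id": 1, "name": "Acme Mining Pty Ltd", "region": "WA"},
    {"id": 2, "name": "Acme Logistics",      "region": "NSW"},
]

def link_mention(mention, context_region):
    """Pick the record whose name shares a token with the mention; use the
    structured `region` attribute to disambiguate between candidates."""
    tokens = set(mention.lower().split())
    candidates = [c for c in customers
                  if tokens & set(c["name"].lower().split())]
    if len(candidates) > 1:                  # structured data breaks the tie
        candidates = [c for c in candidates if c["region"] == context_region]
    return candidates[0] if candidates else None

# "Acme" in an email about a WA site resolves to the WA customer record.
print(link_mention("Acme", context_region="WA"))   # -> record id 1
```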

4) Stream Data Mining (Researchers: Alex Talevski, Pedram Radmand)

The "Secure Wireless Sensor Networks" project delivered a commercial grade technical report on time & within budget. This successful outcome is the basis for a newly proposed "Decision Support" project and we also together with Statoil, SINTEF, ABB, Dust Networks, & others, we secured a $5M Norwegian Research Council grant focused on Wireless Process Control. DEBII is an integral research partner.

5) Ontology Based Data Warehousing and Mining (Researcher: Shastri Nimmagadda)

The key project objective is to develop ontology- and knowledge-based multidimensional data warehousing and data mining methodologies for organising heterogeneous data. Historical heterogeneous data are available in the energy and resources industries, yet so far no systematic approach has been made available to address the issues of data integration and knowledge sharing among different entities and dimensions. This research will contribute to the resources industries by optimising resources and forecasting at different operational units, and by supporting strategic technical and financial decisions. This study will support energy industries, in particular the petroleum industry, in developing sustainable technologies and managing corporate data sources more judiciously.

6) Visualising and Mining Semi-Structured Information (Researchers: Mahsa Mooranian, Elizabeth Chang, Tharam Dillon)

Businesses and organisations need efficient techniques for storing, searching and visualising their day-to-day information. Such information is a valuable asset in various applications, such as analysing and maintaining customer relationship management (CRM), increasing productivity, sales and services and, in turn, increasing revenue. At present, the main sources of information are business applications such as customer comments and communications, trade publications, internal research reports and competitor web sites. Most of this knowledge is stored in textual format, which is not readily assimilated and understood. Techniques are therefore needed to pre-process such information according to the required objectives, to make sense of it and to carry out the correct analysis. To achieve this, the project aims to develop an ontology-based visualisation framework that employs semantic reasoning and conceptual visualisation to retrieve the ideas, opinions, experiences and wisdom within text content and derive knowledge from it. The system will be evaluated using students' essays, and its contribution to saving time and cost and to resolving reliability issues in essay scoring will be assessed.

7) Data Mining for AEG (Researchers: Anhar Fazal, Elizabeth Chang, Tharam Dillon)

The aim of our research is to propose a neural-network-based hybrid model of an Automated Essay Grading (AEG) system that uses a combination of Natural Language Processing (NLP) techniques and intelligent techniques to grade essays. Essay grading is an essential part of every teacher's job and a time-consuming activity, proving to be an expensive task for the government. Given the increasing number of students every year, the essay grading process becomes monotonous and onerous for the teacher, resulting in inefficiency and inaccuracy during grading. An automated essay grading system is highly desirable in such a scenario to reduce the teacher's marking time, improving efficiency and saving taxpayers' money. For over forty years, AEG systems have been proposed, each with its own advantages and disadvantages. However, all of these approaches fail to model an essential aspect of grading essays: the non-linear relationship between the essay's feature vector and the essay grade. To overcome this drawback, we propose an intelligent AEG system that will grade essays in real time.
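
To make the non-linearity point concrete, here is a minimal one-hidden-layer network fitted by gradient descent from a toy essay feature vector to a grade. The features, data and network size are invented for illustration; the project's hybrid NLP/neural model is not shown.

```python
# A tiny one-hidden-layer network mapping essay features to a grade,
# illustrating a non-linear feature->grade fit. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Toy features: [word_count/500, avg_sentence_len/30, vocab_richness]
X = rng.random((64, 3))
y = (0.5 * X[:, 0] + 0.3 * np.sin(3 * X[:, 1]) + 0.2 * X[:, 2]).reshape(-1, 1)

W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)     # input -> hidden
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)     # hidden -> grade

for _ in range(2000):                                # plain batch gradient descent
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y                                   # d(MSE/2)/d(pred)
    gW2, gb2 = H.T @ err / len(X), err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)                 # backprop through tanh
    gW1, gb1 = X.T @ dH / len(X), dH.mean(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.5 * g

print(float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))  # small MSE
```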

8) Barcode Watermarking (Researchers: Vidyasagar Potdar, Song Han, Christopher Jones)

Many digital watermarking algorithms have been proposed in the literature. Broadly, they can be classified into two main categories: the first uses a Pseudo Random Gaussian Sequence (PRGS) watermark, whereas the second uses a binary logo as a watermark. The main advantage of a PRGS-based watermarking scheme is its ability to detect the presence of a watermark without manual intervention; its main drawback is the calculation of a reliable threshold value. Conversely, the main advantage of binary logo watermarking is that there is no need to calculate a threshold value, but it requires manual intervention to detect the presence of a watermark. Since the advantages and disadvantages of each approach are clear, it is attractive to design a watermarking scheme that incorporates the advantages of both. In this project, we worked on one such approach, called bar-code watermarking. The proposed scheme offers a means of both objective and subjective detection: a PRGS sequence watermark is represented as a bar-code on a binary logo and embedded in the host image, and watermark detection can then be done either subjectively or objectively.
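
The embed-and-verify mechanics can be sketched with plain least-significant-bit embedding of a binary pattern into a host image. This is only a toy stand-in: the actual scheme renders a PRGS sequence as a bar-code on a binary logo rather than writing raw bits to LSBs.

```python
# Toy LSB embedding and extraction of a binary "barcode" pattern.
import numpy as np

def embed(host, bits):
    """Write the bit pattern into the least significant bits of the host."""
    marked = host.copy().ravel()
    marked[:bits.size] = (marked[:bits.size] & 0xFE) | bits
    return marked.reshape(host.shape)

def extract(marked, n_bits):
    return marked.ravel()[:n_bits] & 1

host = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
barcode = np.random.default_rng(2).integers(0, 2, 128, dtype=np.uint8)

marked = embed(host, barcode)
print(bool(np.array_equal(extract(marked, barcode.size), barcode)))   # True
```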

TCII Research Area 2: Cyber Physical Systems

The concepts of the Web of Things (WoT) and Cyber-Physical Systems (CPS) raise the new requirement of connecting the cyber world with the physical world. The integration of digital computation and communication with physical monitoring and control allows for robust and flexible systems with multi-scale dynamics that manage the flows of mass, energy and information in a coherent way. Numerous scientific, social and economic issues will emerge, for example the issue of energy shortage. The current state of the art in energy production and distribution (e.g. the utility grid) lacks the infrastructure to cope with inefficient energy use. In order to produce smart energy and to consume energy efficiently, a crucial move is to adopt a framework that tightly integrates the cyber world and the physical world. This will allow both digital information and traditional energy (e.g. electricity) to flow through a two-way smart infrastructure connecting everything around us.

Our vision of CPS is as follows: networked information systems that are tightly coupled with physical processes and environments through a massive number of geographically distributed devices. As networked information systems, CPS involves computation, human activities and automated decision making enabled by information and communication technology. More importantly, this computation, human activity and intelligent decision making are aimed at monitoring, controlling and integrating physical processes and environments to support operations and management in the physical world. The scale of such information systems ranges from micro-level embedded systems to ultra-large systems. Although devices provide the basic interface between the cyber world and the physical one, many key issues common to physical systems (e.g. uncertainty and inaccuracy) are not captured and fully dealt with in the computing domain. Similarly, computational complexity, system evolution, security and software failure are often ignored from the physical system viewpoint, which treats computation as a precise, error-free, static 'black box'. The solution to CPS must break the boundary between the cyber and the physical by providing a unified infrastructure that permits integrated models addressing issues from both worlds simultaneously.

Cyber Physical Systems are empowered by Cyber Information Engineering, which provides technologies for collaborative systems, cloud services and web intelligence. These include:

  • Virtual coalitions characterised by collections of autonomous agents that work together as a result of their own choice of either temporary or long-term coalitions;
  • Peer-to-peer systems where entities interact at the same level; and
  • Semantic web services that use compositions of web services to build complex applications.

A key element of this work is the representation of semantics to allow agents to interact with a common understanding of the underlying meaning. An essential aspect here is the creation of ontologies that are shared representations of knowledge in an area capturing concepts, relationships and constraints. Another key element is the Interactive Web or Web 2.0 technologies such as mashups, gadgets and social networks.

Underlying technologies to support this work are:
  • Cyber security, fraud, spam and intrusion detection
  • Cyber privacy and risk management
  • Cyber trust and accountability
  • XML-based systems and document security
  • Information hiding, fingerprint and digital watermarking
  • Social networks and social responsibility
  • Real-time systems on the web
  • Ontologies and multi-agent systems
  • Soft grids and semantic web services
  • Value of information as a foundation for DES

Cyber Physical Systems are also empowered by Human Space Computing, which shifts the focus from empowering industrial technologies to empowering people, health and the environment. These include:

  • Bluetooth, mobile computing and digital pens
  • Wireless technologies (VoIP, Wi-Fi, IRDA, RFID, GSM, GPRS, 3G …)
  • Software solutions & interfaces (Symbian, Windows Mobile, Palm, J2ME, XML, multi-agents)
  • Telecommunication convergence using wireless devices (video, voice, data)
  • Wireless devices in the resource industry and manufacturing automation
  • Sensor networks and track-and-trace solutions
  • Talking emails and mobile conferences
  • Personal space security and privacy
  • Convergence technologies
  • Extreme interfaces

The studies of these technologies are primarily focused on industry, transportation and manufacturing.

9) Smart Energy through Cyber Physical Systems (Researchers: Alex Talevski, Steve Wallis, Chen Wu, Elizabeth Chang, Markus Lanthaler, Pedram Radmand)

The study aims to reduce energy consumption and related emissions for remote resources industries, such as the oil and gas industries, where power is generated through the supply of diesel to the operation field or mining camps. Through state-of-the-art cyber-physical systems, especially combined Smart Grid, Smart Meter and Smart Home technologies, we can monitor, control and reduce energy use and emissions, as well as improve the quality of the environment, workplace and living conditions for those who reside at mining sites or in rural camps.

10) Cloud Computing (Researchers: Tharam Dillon, Chen Wu, Mohammed Alhamad)

Cloud computing uses the notion of utility computing in conjunction with software as a service to deliver applications. Applications may be moved to a cloud for economic reasons (a pay-as-you-use philosophy) or to deal with massive computing demands, using virtualisation to create the flexibility to access large processing or storage capacity. Its technical aspects are generally broken down into three levels: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). The key capabilities of cloud computing are on-demand processing, on-demand data storage, on-demand data throughput, and software as a service. Several research questions on the underlying technical problems of cloud platform infrastructure need to be addressed, including:

  • What is the right software architecture for both public and private Clouds?
  • What is the right multi-tenants data architecture that can deal with massive data storage, data security, confidentiality and privacy?
  • How to factor in application (including crowd-sourced systems) and mashup specific characteristics?
  • What are optimal parallelisation models for different types of applications/domain or programming styles?
  • What are the interfacing and integration mechanisms to incorporate SaaS, PaaS and IaaS as a scalable and efficient Cloud?

11) Secure Wireless Sensor Network for Statoil (Researchers: Alex Talevski, Pedram Radmand, Elizabeth Chang)

The "Secure Wireless Sensor Networks" project delivered a commercial grade technical report on time & within budget. The report illustrated many great opportunities in using Wireless Sensor Network technology in the resources, oil & gas fields. However, it also found significant security vulnerabilities, which must be addressed before adopting this technology for critical control & monitoring tasks. The document now forms a fundamental knowledge base within Statoil. Various extracts are now found within numerous internal policy documents. This successful outcome is the basis for a newly proposed "Decision Support" project.

12) Telecommunication Convergence (Researchers: Alex Talevski, Elizabeth Chang)

Organisational alliances are rapidly being formed as a means of effective cooperation towards a common goal within a targeted value chain. The combination of such communication, coordination and cooperation leads to new organisational forms and scenarios within the Digital Ecosystem space that require technological support. Convergence refers to the move towards a single, unified interaction medium, enabling telecommunication services that are concurrently coupled with enterprise and internet data. However, such converged telecommunications and data services have been largely restricted to static environments where fixed Personal Computers (PCs) and network connections are used in conjunction with customised software tools that simulate pseudo-converged sessions. Generally, data presented on the internet and in enterprise applications is not available in a mobile wireless environment. The diverse nature of this environment demands a feature-rich, flexible and widely accessible solution.

13) Low-Power Embedded Wireless Multimedia (Researchers: Atif Sharif, Vidyasagar Potdar)

The availability of inexpensive hardware such as CMOS cameras and microphones has fostered the development of Wireless Multimedia Sensor Networks (WMSNs). Such networks of wirelessly interconnected devices are able to retrieve multimedia content such as video and audio streams, still images, and scalar sensor data from the environment. This research aims to develop a prototype of a 'video node', a miniaturised electronic system comprising three main components: a solid-state video sensor, a digital processing subsystem that can locally run complex signal processing operations, and a digital radio interface to connect with other similar nodes in order to form a cooperating network. This project has very innovative features: (i) its ability to perform processing locally; (ii) its low power consumption, which makes it suitable for battery operation or environmental power supply; (iii) its low-power radio interface that can link neighbouring nodes in a network; (iv) its flexibility by means of field reconfiguration (of both the local processing and the communication protocols); and (v) the development of transport functions for efficient and reliable multimedia transmission in WMSNs, taking into account the aforementioned requirements and constraints inherent to these networks.

14) Smart Home (Researchers: Tharam Dillon, Omar Hussain, Omid Ameri)

Smart Grid is a novel initiative that aims to deliver energy to users and to achieve efficiency in its consumption through two-way communication. The smart grid architecture is a combination of various hardware devices and management and reporting software tools, coupled together by the ICT infrastructure. This infrastructure is needed to make the smart grid sustainable, creative and intelligent. One of the main goals of the smart grid is to achieve Demand Response (DR) by increasing end users' participation in decision making and raising the awareness that will lead them to manage their energy consumption efficiently. The aim of this research is to develop an approach by which demand response is achieved on a continuous basis at the home level. To achieve this, the dynamic notion of price will be utilised to develop an intelligent decision-making model that will assist users in achieving demand response.
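
A minimal sketch of price-driven demand response at the home level, under invented prices, loads and a deliberately simple policy: deferrable loads are shifted to the cheapest forecast hour. The project's intelligent decision-making model is, of course, far richer than this.

```python
# Toy price-responsive scheduling of deferrable household loads;
# prices, loads and the one-line policy are invented examples.

hourly_price = [0.30, 0.28, 0.12, 0.10, 0.11, 0.35]          # $/kWh forecast
flexible_loads = [("dishwasher", 1.2), ("ev_charger", 7.0)]  # kWh per run

def schedule(loads, prices):
    """Run each deferrable load in the cheapest forecast hour."""
    cheapest = min(range(len(prices)), key=prices.__getitem__)
    return {name: cheapest for name, _ in loads}

plan = schedule(flexible_loads, hourly_price)
run_now = sum(kwh * hourly_price[0] for _, kwh in flexible_loads)
deferred = sum(kwh * hourly_price[plan[name]] for name, kwh in flexible_loads)
print(plan, f"saving ${run_now - deferred:.2f} versus running immediately")
```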

15) SoftGrid (Researchers: Tharam Dillon, Elizabeth Chang, Pornpit Wongthongtham)

The aim of this project is to provide a public semantic grid infrastructure, i.e. the SoftGrid, which supports an array of computing and data intensive e-science and e-engineering applications that are crucial for collaborative scientific research endeavours and economic development (e.g. the resources sector) across the Australian research institutes and government agencies. SoftGrid will empower scientists distributed around the globe to collaboratively carry out intensive computational and communicational research. This has a huge impact on the Australian research community, in particular, in those sustainable research areas such as climate change and mining exploration, which often require a massive amount of user involvement and computation (e.g. climate simulation, seismic-based forecasting and the like) in a collaborative fashion. The proposed SoftGrid infrastructure will enable Australian researchers to efficiently and accurately locate needed information from a plethora of data within the computing grid. It will help researchers to communicate, connect and share research information within a virtual research team. More importantly, it allows researchers to rapidly construct their desired simulation environments through easy customisation and ‘mashup’ of pre-existing simulation test beds without having to build a simulation environment from scratch. These benefits dramatically enhance the productivity of scientists when undertaking various computational and time-consuming collaborative research activities essential for Australia’s economic development, such as in the mining and resource sectors.

16) Smart Grid (Researchers: Jaipal Singh, Alex Talevski, Vidyasagar Potdar)

Smart Grid is a novel initiative that aims to deliver energy to users and to achieve efficiency in its consumption through two-way communication. The smart grid architecture is a combination of various hardware devices and management and reporting software tools, coupled together by the ICT infrastructure. This infrastructure is needed to make the smart grid sustainable, creative and intelligent. One of the main goals of the smart grid is to achieve Demand Response (DR) by increasing end users' participation in decision making and raising the awareness that will lead them to manage their energy consumption efficiently. Many approaches have been proposed in the literature to achieve demand response at different levels of the smart grid, but no approach has focused on achieving demand response from the users' point of view at the home level, on a continuous basis and in an intelligent way. The aim of this research is to develop such an approach by which demand response is achieved on a continuous basis at the home level. To achieve this, the dynamic notion of price will be utilised to develop an intelligent decision-making model that will assist users in achieving demand response.

TCII Research Area 3: Semantic Technology

Semantic Technology separates meaning from data, allowing disparate data sources to communicate and interoperate with each other. Semantic Technology makes use of ontologies, which are shared and agreed-upon conceptualisations of a domain. Ontologies are readable by both machines and humans. This additional layer of abstraction was developed to increase interoperability between different databases and to make the proliferation of various (historical) data formats manageable. When applied to the Web, this concept is called the Semantic Web (a term coined by Tim Berners-Lee). The Semantic Web refers to the 'web of data' and is considered to be the next generation of the Web. In the Semantic Web, machines can understand the meaning of data, which allows for greater efficiency as machines, rather than humans, can perform computations regarding content, relevancy, relations and so on.
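
The core idea of machine-readable meaning can be shown with a minimal subject-predicate-object triple store and a wildcard query, the pattern underlying RDF. The vocabulary here is an invented example.

```python
# Facts as machine-readable subject-predicate-object triples plus a tiny
# pattern query, the idea underlying RDF. Vocabulary is an invented example.

triples = {
    ("Perth",  "locatedIn", "Australia"),
    ("Curtin", "locatedIn", "Perth"),
    ("Curtin", "type",      "University"),
}

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

print(query(p="locatedIn"))          # both locatedIn facts
print(query(s="Curtin", p="type"))   # [('Curtin', 'type', 'University')]
```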

Semantic technologies and their principles are widely used in industry and academia today, often in conjunction with other technologies such as data mining, software agents, recommender algorithms, search technologies and social networks. At DEBII, we study and develop semantic technology and ontologies for business domains, including but not limited to:

  • Ontology engineering, design and implementation
  • Ontology systems and onto-servers
  • Ontology, sub-ontology and commitment
  • Ontology merging and alignment
  • Ontology learning and evolution
  • Ontology presentation tools and applications
  • Semantic Web and web semantics
  • Data semantics and business semantics

KEY PROJECTS

17) Ontology Learning (Researchers: Tharam Dillon, Pornpit Wongthongtham)

The aim is to apply the developed tree mining algorithms to enable ontology building through the matching of existing knowledge representations from the same domain. The main problem to be addressed in this process is to find semantically correct matches among the concept terms in heterogeneous knowledge representations. We will initially avoid considering concept labels as a guide for the formation of candidate mappings and will instead use the structural information in which concepts occur in a particular knowledge representation. Taking the structural position of the concept term nodes into account is, to a certain extent, a promising approach for considering the context in which the concept terms are used; taking context into consideration is one of the main difficulties in existing approaches. As opposed to matching concepts based upon label comparison, taking the structural aspects into account will indicate possible complex matches (i.e. cases where a concept term in one knowledge representation maps to multiple concept terms in another). The relations considered are limited to the subsumption relations implied by the concept hierarchy or taxonomy. In this respect, the two main problems considered are the matching of knowledge representations at the conceptual and structural levels. Once efficient graph mining approaches have been developed, a similar idea will be applied to obtain a graph-structured ontology through matching of graph-structured, heterogeneous knowledge representations of the same domain.
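
A deliberately crude illustration of label-free, structure-driven matching: every concept node gets a signature derived purely from its structural position (depth, fan-out, subtree size), and nodes across two taxonomies are paired by closest signature. This toy heuristic only gestures at the tree-mining-based matching described above.

```python
# Label-free structural matching toy: signature = (depth, fan-out, subtree
# size); concepts across taxonomies are paired by closest signature.

def signatures(children, root, depth=0):
    """Map every node under `root` to (depth, number of children, subtree size)."""
    sig, size = {}, 1
    for c in children.get(root, []):
        child_sig = signatures(children, c, depth + 1)
        sig.update(child_sig)
        size += child_sig[c][2]
    sig[root] = (depth, len(children.get(root, [])), size)
    return sig

def match(tax_a, root_a, tax_b, root_b):
    sa, sb = signatures(tax_a, root_a), signatures(tax_b, root_b)
    dist = lambda p, q: sum(abs(x - y) for x, y in zip(p, q))
    return {a: min(sb, key=lambda b: dist(sa[a], sb[b])) for a in sa}

# Two small hierarchies whose labels share nothing; structure still aligns.
tax_a = {"rootA": ["vehicle", "person"], "vehicle": ["car", "truck"]}
tax_b = {"rootB": ["conveyance", "human"], "conveyance": ["auto", "lorry"]}
print(match(tax_a, "rootA", tax_b, "rootB"))
```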

18) Defeasible Argumentative Reasoning (Researcher: Naeem Janjua)

The aim of this research is to propose and validate a framework for defeasible argumentative reasoning in Semantic Web applications. The Web is a source of a huge amount of data, and Semantic Web efforts are targeted towards making web content machine-understandable, which will have a significant impact on the way information is exchanged and business is conducted. As the ontology layer of the Semantic Web has reached sufficient maturity (i.e. standards like RDF, RDFS, OWL and OWL 2), the next step is to work on the logic layer to develop advanced reasoning capabilities for knowledge extraction and efficient decision making. Adding logic to the Web means using rules to make inferences. Rules are a way to express business processes, policies, contracts, etc., but most studies have focused on the use of monotonic logics in the layered development of the Semantic Web, which provide no mechanism for representing incomplete information or handling contradictory information. These limitations are inherited by Description Logics, being a subset of predicate logic. Defeasible logic programming is based on non-monotonic logic and has been used in software agents to carry out goal-driven defeasible reasoning. Defeasible reasoning is a rule-based approach for reasoning over incomplete, inconsistent and uncertain information, in which priorities are used to resolve conflicts among rules. The Semantic Web is a source of defeasible knowledge, as it is open by nature and subject to inconsistencies deriving from multiple sources; it is therefore not possible to define priorities among conflicting rules in advance. Additionally, quantitative approaches to reasoning on the Semantic Web are criticised for their inability to generate easy-to-understand and logically clear results. We are interested in exploiting the power of defeasible logic and argumentation for data-driven reasoning on the Semantic Web by identifying the issues involved in mapping RDF/OWL ontologies to defeasible logic programming, determining how to carry out argumentative reasoning on the Semantic Web, how DeLP rules can be shared on the Web, and how tractable, customisable results can be presented to the user.
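
The role of priorities in defeasible reasoning can be sketched with a tiny rule evaluator in which two rules support contradictory conclusions and an explicit priority resolves the conflict; the bird/penguin rules are the textbook example, not project content.

```python
# Tiny defeasible-rules evaluator: contradictory conclusions are resolved
# by rule priority. Rules are the classic illustrative example only.

# (name, preconditions, conclusion, priority) - higher priority wins
RULES = [
    ("r1", {"bird"},            "flies",     1),
    ("r2", {"bird", "penguin"}, "not flies", 2),   # more specific, stronger
]

def conclude(facts):
    verdicts = {}
    for name, pre, concl, prio in RULES:
        if not pre <= facts:                       # rule does not fire
            continue
        negated = concl.startswith("not ")
        atom = concl[4:] if negated else concl
        if atom not in verdicts or prio > verdicts[atom][1]:
            verdicts[atom] = (not negated, prio)
    return {atom: holds for atom, (holds, _) in verdicts.items()}

print(conclude({"bird"}))              # {'flies': True}
print(conclude({"bird", "penguin"}))   # {'flies': False}
```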

19) Ontology Evolution in Defence Asset Management (Researchers: Naeem Janjua, Pornpit Wongthongtham, Ahmed Aseeri)

This research aims to develop a framework to assist Software Engineering Ontology evolution and management using a semantic-wiki-based approach. The Software Engineering Ontology encompasses common shareable software engineering knowledge and software engineering concepts, and how and why they are related. However, most existing ontologies, including the Software Engineering Ontology, are derived from a single perspective, which can be confusing for users, lack maintainability, and leave the ontologies obsolete and impracticable. Additionally, the knowledge encoded in ontologies is not static but should evolve. To overcome these impediments to ontology evolution, this research will use a semantic-wiki-based approach that provides an environment for discussing and formalising ontology issues, supports ontology evolution, and assists in maintaining ontology versions.

Ontology evolution is one of the main issues that ontology users face today. The issues include a lack of communication between offshore and onshore staff when developing and evolving an ontology in the Defence domain. Additionally, much of an ontology engineer's work involves updating the ontology manually, and the process of evolving the ontology requires work from domain experts. There has not been a cost-effective communication method for discussing and agreeing upon changes. In this project, a lightweight community-driven approach is developed to enhance communication around ontology evolution. This will enable a more efficient and effective way to communicate and make better decisions.

20) Reconfigurable Software Architectures (Researchers: Alex Talevski, Elizabeth Chang, Tharam Dillon, Chen Wu)

Research has shown that component-based software engineering leads to software that exhibits higher quality and shorter time-to-market and, therefore, lower development costs. However, developing a software application by statically integrating components is static component composition: using this approach, it is difficult to modify a software system after it has been deployed, and reconfiguration, addition, removal or replacement of components may require significant modifications to the application source code. Such modifications have proven to be error-prone, time-consuming and expensive. Statically composed software development is therefore suited to well-defined applications with small user bases that rarely change. However, large-scale software accounts for 85% of all software development undertakings, and such software is typically very complex and inflexible. In order to satisfy large-scale software development efforts, a framework and platform are required to facilitate the integration and reconfiguration of components. We propose a reconfigurable plug-and-play software framework and platform that enables application composition and reconfiguration, realising the concept of 'model once, generate anywhere'.

21) Software Autotuning and Testing (Researchers: David McMeekin, Elizabeth Chang)

Software autotuning and testing uses a full-circle approach to software testing, starting from business requirements and high-level design and moving down to lower-level computing and programming. It has the capability to test across the entire software development and deployment lifecycle, facilitating the prevention and/or discovery of quality issues early in that lifecycle. Our testing lab has a host of unit, integration, system and user acceptance testing capabilities, with much of the testing process being automatic or semi-automatic. The testing methodologies include user acceptance testing, functional testing, load and stress testing, test management and requirements management.

TCII Research Area 4: Real Time Systems

22) RFID Authentication (Researchers: Song Han, Vidyasagar Potdar, Elizabeth Chang, Tharam Dillon)

RFID, as an anti-counterfeiting technology, has enormous potential in industrial, medical, business and social applications. RFID-based identification is an example of an emerging technology that requires authentication. In this project, we design mutual authentication protocols for RFID tags that are server-less or server-based, with monitor-based authentication. The RFID reader and tag carry out the authentication based on their synchronised secret information. This synchronised secret is monitored by a component of the database server in the server-based model, and by a component of the RFID reader in the server-less model. Our protocols are also designed for RFID tags with little non-volatile memory, which is desirable since non-volatile memory is an expensive component of low-cost RFID tags.
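
A toy mutual-authentication round in this spirit, using an HMAC over a synchronised secret followed by a synchronised key update; the message flow and update rule are illustrative assumptions, not the project's protocol.

```python
# Toy server-less mutual authentication between reader and tag sharing a
# synchronised secret; message flow and key update are invented assumptions.
import hmac, hashlib, os

def mac(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

shared = os.urandom(16)                  # synchronised secret on both sides
reader_key = tag_key = shared

# 1. Reader challenges; the tag proves knowledge of the secret.
challenge = os.urandom(8)
tag_resp = mac(tag_key, b"tag", challenge)
assert hmac.compare_digest(tag_resp, mac(reader_key, b"tag", challenge))

# 2. The tag verifies the reader the same way (mutual authentication).
tag_nonce = os.urandom(8)
reader_resp = mac(reader_key, b"reader", tag_nonce)
assert hmac.compare_digest(reader_resp, mac(tag_key, b"reader", tag_nonce))

# 3. Both sides derive the next synchronised secret for the next session.
reader_key = mac(reader_key, b"update", challenge, tag_nonce)
tag_key = mac(tag_key, b"update", challenge, tag_nonce)
assert reader_key == tag_key
print("mutual authentication and key re-synchronisation succeeded")
```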

23) RFID Tamper Detection (Researchers: Vidyasagar Potdar, Chen Wu, Christopher Jones)

Security and privacy are primary concerns for RFID (radio frequency identification) adoption. While the mainstream RFID research is focused on solving the privacy issues, this project focuses on security issues in general and data tampering in particular. We specifically consider the issue of detecting data tampering on the RFID tags for applications such as data integrity management. To address this issue, we developed a novel fragile watermarking scheme, which embeds a fragile watermark (or pattern) in the serial number partition of the RFID tag. This pattern is verified in order to identify whether or not the data on the RFID tags has been tampered with. The novelty of this watermarking scheme lies in the fact that we have applied watermarking technology to RFID tags. In comparison, most of the existing watermarking schemes are limited to images, audio or video applications. We call this scheme TamDetect because it is a tamper detection solution. TamDetect is designed so that it can be easily plugged into existing RFID middleware applications. This research is one of the first works to integrate watermarking and RFID technologies.
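
A toy TamDetect-style check, under an invented field layout and a 16-bit pattern: a fragile pattern derived from the tag's payload is stored in the serial-number partition, so any change to the payload breaks verification.

```python
# Toy fragile-watermark check in the spirit of TamDetect; the field layout
# and 16-bit pattern length are illustrative assumptions.
import hashlib

def fragile_pattern(payload: str) -> str:
    """16-bit (4 hex digit) pattern derived from the tag payload."""
    return hashlib.sha256(payload.encode()).hexdigest()[:4]

def write_tag(payload):
    return {"payload": payload, "serial": fragile_pattern(payload)}

def verify_tag(tag):
    return tag["serial"] == fragile_pattern(tag["payload"])

tag = write_tag("EPC:urn:acme:widget:000123")
print(verify_tag(tag))                           # True
tag["payload"] = "EPC:urn:acme:widget:000999"    # tampering
print(verify_tag(tag))                           # False
```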

24) Real time Traffic Prediction for Main Roads Western Australia (Researchers: Jaipal Singh, Kit Yan Chan, Tharam Dillon, Saghar Khadem)

The Main Roads project aims to reduce traffic congestion & improve the throughput of WA's roadways. It is currently in the early investigation stages. It aims to use cutting-edge Wireless Sensor Network technologies & data mining techniques to improve our commute to work (when the roadways are most stressed). DEBII aims to stamp its mark by reducing all of our citizens' transportation times so that we can spend more time with our families. It is our great pleasure to be assisting WA's Government organisations for the benefit of our state & our citizens.

25) Value of Information (Researchers: George Fodor, Elizabeth Chang)

Information, in the form of patents, market analysis, R&D and software, incurs increasingly high costs for firms; these are generally expensive, essential but intangible assets that can determine a firm's rise or fall. While the cost of creating information is high, the price of information varies from almost free on the internet to multi-millions of dollars for acquiring patents. Estimating the value of information is therefore a critical input into firms' strategic decision making. By their very nature, strategies are developed in the presence of incomplete information. This means that progressively less-informed strategies may be developed if less information is created or, on the contrary, an exciting, information-rich environment will lead to increasingly higher value, as seen in very successful firms and faculties. We call this the impact of the information spiral on strategic development. The same piece of information may have either a very high or a very low value depending on production volume, and existing information in organisations (traditions, tacit knowledge) may create highly valued information. The value of information depends on many factors that are notoriously hard to quantify and compare. The proposed research places the value of information within a rather general economic reference framework that considers the role of information technology. The results are expected to have applicability to market strategy, corporate R&D planning and a better understanding of research directions in information technology.

26) Ontologies for Trust and Reputation (Researchers: Elizabeth Chang, Farookh Hussain, Tharam Dillon, Pornpit Wongthongtham)

An ontology can be viewed as a shared conceptualisation of a domain that is commonly agreed to by all parties. In this research project, we define ontological manifestations of trust and reputation, proposing both a generic ontology and specific ontologies for trust and reputation. We define a trust ontology, an agent trust ontology, a service trust ontology and a product trust ontology, and express the relationships between them. Additionally, we define a reputation ontology, an agent reputation ontology, a service reputation ontology and a product reputation ontology, and express the relationships between these. An ontological manifestation is defined for the reputation relationship, the reputation query relationship, the recommender relationship and the third-party trust relationship.

27) Risk-Based Decision Making (Researchers: Omar Hussain, Tharam Dillon)

In the modern world, e-commerce interactions are increasingly carried out in virtual environments, increasing the possibility of fraudulent transactions due to non-compliance by either communicating party. As such, there is an increased need for tools that help in making informed interaction-based decisions. The Risk-based Decision Support System (RDSS) is a complete and comprehensive methodology developed at DEBII for assessing, expressing and managing risk in e-commerce interactions, and is the first of its type for risk assessment, measurement and management in such interactions. The objective of the project is to assess and analyse the level of risk, as a function of its constituents, in collaborative environments, and then manage it during decision making. To achieve this, RDSS develops an architectural framework to ascertain and model the levels of performance risk and financial risk according to the specific characteristics of the interaction, and according to their uncertainty when each is determined at a certain point of time in the future. The risk in the interaction is then analysed and quantified according to its constituents using two mathematical approaches: possibility theory and fuzzy logic. The output is presented to the user in graphical format for better understanding. Based on the analysed risk, RDSS then carries out the steps of risk management and utilises them while making informed interaction-based decisions. For risk management, RDSS utilises a mathematical approach that determines the impact of the interaction-initiating agent's risk propensity on the determined level of risk in an interaction, based on which it recommends an interaction-based decision. It provides the user with an effective decision-making methodology in an open interaction environment. The output of the project, when applied to interactions in virtual environments, will help the interacting agent to maximise its interaction experience and expected benefits.

28) Risk Quantification (Researchers: Omar Hussain, Elizabeth Chang, Tharam Dillon)

Business interactions are the engine that drives the economy of the modern world. By business interactions, we mean interactions across the multi-disciplinary areas of that domain which might have financial implications for either agent involved. Interactions in these domains are carried out with the aim of achieving certain specific outcomes that are consequential for the progression, advancement and sustenance of the particular business or individual, and failure to achieve them might have far-reaching consequences, one of the most important being the risk of experiencing financial loss. In this project, we analyse and assess this risk as an important constituent of making informed decisions within the broader perspective of business interactions. One area addressed in this project for risk quantification is the probabilistic assessment of loss in revenue generation in demand-driven production. In today's competitive world, manufacturers are constantly subjected to massive pressure to reduce their operational costs while improving or increasing their production efficiency. Cost reduction relating to production is not a bad thing for manufacturers, but it implies that they have to shift to or adopt new processes for producing their goods. One such process is to produce and deliver consumers' orders within a fixed timeframe in order to obtain the required revenue from the transactions. In this project, we consider processes from the manufacturer's perspective and assess the probabilistic level of risk incurred by not fulfilling the required demand within the given period, as well as the level of financial consequences to the manufacturer of not fulfilling that demand. The output of this project will have far-reaching consequences when applied to the manufacturing industry.
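
The probabilistic assessment can be illustrated with a toy Monte Carlo estimate of (i) the probability of failing to fill an order within the fixed timeframe and (ii) the expected revenue shortfall when that happens. The production model, distributions and prices are invented.

```python
# Toy Monte Carlo estimate of demand-fulfilment risk; all numbers invented.
import random

random.seed(0)
order_qty, unit_price, hours = 1000, 25.0, 40

def units_produced():
    """Units produced in the window, with an uncertain hourly rate."""
    return sum(random.gauss(24, 6) for _ in range(hours))

shortfalls = [max(order_qty - units_produced(), 0) for _ in range(20_000)]

p_miss = sum(s > 0 for s in shortfalls) / len(shortfalls)
expected_loss = unit_price * sum(shortfalls) / len(shortfalls)
print(f"P(miss order) ~ {p_miss:.2f}, expected revenue loss ~ ${expected_loss:.0f}")
```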

29) SLA-based Trust Model for Cloud Computing (Researchers: Mohammed Almahad, Tharam Dillon, Chen Wu)

Cloud computing has been a hot topic in the research community since 2007. In cloud computing, online services are delivered on a pay-as-you-use basis, and service customers need not be in a long-term contract with service providers. Service level agreements (SLAs) are agreements signed between a service provider and another party such as a service consumer, broker agent or monitoring agent. Because cloud computing is a recent technology providing many services for critical business applications, reliable and flexible mechanisms to manage online contracts are very important. This research presents the main criteria that should be considered when designing SLAs for cloud computing. We also investigate the negotiation strategies between cloud provider and cloud consumer, and propose a method to maintain trust and reliability between the parties involved in the negotiation process. To validate the output of this project, significant experiments will be conducted and a reliable simulation tool will be used to test the experimental scenarios.

TCII Research Area 5: Transport, Logistics and Supply Chain

Logistics Informatics and Industrial Informatics address the research area of business automation. As the Internet becomes more widespread and pervasive, the vision of intelligent and collaborative industrial environments connecting all logistics partners, manufacturers and end-users in dynamic, agile and reconfigurable enterprise structures has become a reality. As industrial systems have become more intelligent, automated, dynamic and distributed, and as the monitoring and control of operations has shifted towards the electronic paradigm and is increasingly carried out over the Internet (as is the case for sales of e-services), the need for the scientific and engineering discipline of logistics informatics and industrial informatics has emerged. Logistics activity represents approximately 9% of Australia's GDP, or $57 billion, and it has been found that the introduction of collaborative logistics systems can achieve a 500% return on investment; weaknesses in logistics capabilities create a multi-billion dollar cost burden on the Australian economy. Logistics and supply chains are vital to the global economy, especially in developing countries, where 90% of logistics companies are SMEs. They provide the engine for growth of value-added products and services in the supply chain marketplace and deliver substantial social and economic benefits for Australia. The Transport and Logistics Lab performs multi-disciplinary research incorporating industrial informatics, environmental engineering, ICT, business intelligence, transport economics and supply chain management.

KEY PROJECTS

30) Intelligent Airport Track-and-Trace Solutions (Researcher: Vidyasagar Potdar)

Airports need to keep track of i) baggage and ii) airline clients prior to a plane taking off. A widely used system for keeping track of baggage from check-in to the aircraft, and for its transfer from an intermediate point to onward destinations, is the use of barcodes. However, such barcode systems for baggage tracking still have high error rates. This, together with the actual load experienced, can cause difficulties with baggage handling, leading to several hundred, sometimes thousands, of bags being lost daily, for instance at the new Terminal 5 at Heathrow airport. In some airports, such as Hong Kong, RFID tags are increasingly being used to keep track of the movement and transfer of baggage, which can lead to reduced error rates and improved efficiency in the handling of greater volumes of baggage. Another problem faced by airlines is the late arrival of passengers due to board an aircraft even though they have checked in. If a passenger does not show up on time, it takes approximately twenty minutes for the plane to offload the no-show passenger's baggage, which causes the aircraft to lose its take-off slot. To cope with this issue, it would be useful to have passenger tracking, and perhaps a messaging system, based on RFID and SMS messaging. Such a system can also be leveraged to give the customer specific information related to his or her flight or purchasing preferences in the airport shopping complex. This project investigates the use of passive and active RFID tags for achieving intelligent baggage and passenger tracking, together with an associated message and communication mechanism.

31) Smart Services (Researchers: Alex Talevski, Vidyasagar Potdar, Pedram Radmand)

The project investigates the development of a rapidly reconfigurable, optimised, sense-and-respond approach for smart manufacturing services and products at multi-site production locations of the sort found in the mining, oil and gas industries. We have noted an increasing need for greater visibility of the production process, information management and decision support for operations managers. Management functions and track-and-trace facilities are required across multiple plants in order to better control and improve visibility of resources and schedules, achieve the highest field productivity, minimise operational costs and deliver quality products and services on time. Scheduling and estimation systems are required in order to understand the time scale of an entire contract, including the approximate time of production, sub-contractors, pre-fabricated materials assembly and others as required. This project addresses the issues of communication, coordination, situation awareness, security, automated data collection in changing environments, highly flexible workflows and the introduction of new techniques and components, using cutting-edge IT and web techniques such as ontologies, multi-agents, web services, RFID technologies, network flow-based optimisation and high-level Coloured Petri nets for timing constraints.

32) Tracking Lifting Gear Equipment (Researchers: Vidyasagar Potdar, Haji Binali)

This project aims to develop an RFID solution to provide reliable identification information for lifting gear equipment used in the oil and gas industry. Equipment such as chains, shackles, slings and hooks has to be certified according to Australian Standards. However, the problem the certifying authorities face is uniquely identifying these items, because it is quite easy to move items around and lose the identification information. Currently, metal tags are used to provide identification, but data entry for them must be performed manually, which can result in data entry errors, and tags can easily be switched from a certified item to an uncertified one. This project implements High Frequency (HF) RFID tagging of all lifting gear equipment. The key advantage of using RFID to track lifting equipment is that once RFID tags have been embedded in the equipment, they are difficult to remove and replace.

33) Virtual Collaborative Logistics (Researchers: Elizabeth Chang, Alex Talevski, Eka Guatama)

This project develops methodologies and systems to allow logistic companies to form coalitions with worldwide logistic providers. Using this approach, supply chain services and physical resources are coupled and extended beyond their typical region of operation. It involves the coupling of e-Transportation, e-Warehousing, e-Hub Connector and one-stop-shop, P2P (Partner to Partner) and B2B (Business to Business). This has the potential of allowing medium sized, regionally based, logistics providers to compete on the world stage with giant logistic providers such as UPS and FedEx. The project builds up a virtual collaborative logistics consortium that is especially suited to SME (small medium enterprise) supply chain providers, partners and alliances. The proposed IT infrastructure will allow the seamless exchange of information between the international logistics partners wherever they are and whenever it is necessary. This enables global supply chain management for all national / international clients, partners, customers, suppliers and buyers around the world to track the details and movement of goods anywhere at any time. The system also supports the need for communication and co-ordination between all partner transporters and warehouses around the world.

34) Carbon Emissions Accountability (Researchers: Valencia Lo, Vidyasagar Potdar)

Global warming is becoming a major problem, and carbon emissions from a variety of sources are its cause. To control emissions, a number of carbon emission reduction policies and schemes, such as the Kyoto Protocol and the COP15 treaty, have been agreed and put in place. Many accounting models have been proposed in the literature to solve the problem of responsibility ambiguity. However, the current accountability models are designed for general industries rather than for the aviation industry, and we believe they cannot be applied to aviation directly since the factors of influence are significantly different. Aviation involves a mix of international and national factors, such as the accountability and the implications of members and non-members of the climate change treaties in different countries. Hence, taking into account all the determinant factors and the different stakeholders involved in the carbon accounting process, our research proposes an efficacious and fair accountability model for the aviation industry. This model can be used to assist the Australian government in devising a fair tax relief/subsidy scheme for aviation companies to support more sustainable tourism, since the inclusion of aviation in a carbon reduction scheme will be taxing for the growth of the aviation and tourism industries.

35) Service Space and Semantic Service Discovery (Researchers: Chen Wu, Farookh Hussain, Hai Dong)

In this work, we have defined a new conceptual framework for an enhanced Service-Oriented Architecture (SOA) infrastructure – Service Space – with regard to service distribution and service discovery. We have explored the junction of the frontiers of several ICT disciplines: software architecture, information retrieval, distributed systems, business intelligence and SOA. The framework integrates web services, social networking and the Web 2.0 technology by conceptualising and realising a number of original web-compliant SOA architectural styles for service-oriented computing. In fact, this is the first research that integrates Web 2.0 practices into the area of web services / SOA. In particular, the concept of being able to search for an entity in order to form a coalition with it is central to the idea of the formation of Digital Ecosystems. Traditional service discovery mechanisms, which are typically non-semantic in nature, suffer from many issues, most notable of which is the imprecision of search results. Moreover, there is no method by which the quality of an entity that has been selected as a result of the search process can be ensured. In this research project, we propose a method that filters and ranks the result of search processes based on the quality of the entities retrieved as a result of the search process. The proposed system keeps track of the quality of all the entities and displays them for the user during the search-retrieval process. The search process is enabled by semantic crawlers and ontologies.

TCII Research Area 6: Health Informatics and Assistive Devices

Industrial Informatics includes Health Informatics, which focuses on the application of advanced industrial technologies within the health domain. The synergy between technologies and health disciplines has the potential to address and solve many of the most pressing issues within the health domain. Through the marriage of technology and health disciplines, both research areas continue to grow and advance in new directions. Value is added to advanced technologies as they are applied to solve important health issues, and enriching the health domain with the latest computer techniques maximises the use and value of existing health information.

36) Travel Aids for the Blind (Researcher: David Calder)

The full potential (and pitfalls) of electronic mobility and navigation aids for the blind are only beginning to be understood. There are many devices on the market, each with significant drawbacks. This DEBII researcher has invented a unique user interface that overcomes many of these problems and is capable of delivering field-of-view information in a novel manner, giving blind users increased freedom and confidence when walking. The manner in which the range-finding capability of the field-of-view sub-system functions is unique, mirroring the learned behaviour and experiences of a typical blind person. The design attempts to minimise cognitive dissonance issues, so the trauma of moving from one technology to another is minimised. Taking the user's background experience into account is one of the major considerations of the design, a characteristic often ignored by competing products. In December 2007, Dr Calder won the New Inventor Competition for this design, and pre-seed funding for commercialisation was secured in 2008.

Dr Calder reached round two of the 2011 WA State Innovation Awards, as one of ten 'Early Stage' category entrants chosen from over eighty WA applicants. Associated advanced designs are currently being developed, and a provisional patent has been secured through Wrays Patent Attorneys. Full commercialisation of a range of travel aid products is under way.

37) Multimedia Speech Therapy and Dementia Tools (Researcher: David Calder)

The original requirement specifications aimed to minimise constant speech therapist/patient supervision, particularly where time-consuming repetitive tasks are involved. Therapists can use their time more effectively in planning new goals whilst the computer provides visual and sound cues to the client. Therapists no longer have to organise cue cards or sort through hundreds of icons and drawings. These were seldom in colour, whereas the computer-based system augments these traditional methods with colour and animation. The latter was something that could not be achieved on loose pieces of paper or cardboard! Consequently, the therapy process can run more smoothly and effectively, as all cues are presented on the screen and/or produced by the high-quality stored speech system. Ongoing work includes the design of more intelligent system interfaces that adapt to the learned requirements of the individual patient or user, including Alzheimer's sufferers and others with dementia-related needs. Special-purpose alternative input devices are under development that allow single- or dual-switch entry by the patient, since many stroke patients have associated motor speech impairment and left-limb paralysis.

38) Protein Ontology (Researchers: Amandeep Sidhu, Tharam Dillon, Elizabeth Chang)

Protein Ontology defines the basic concepts and relationships in the proteomics domain, as well as rules for combining data and information sources using these concepts and relationships. The Protein Ontology is part of the standardised biomedical ontologies available through the National Center for Biomedical Ontology, along with the Gene Ontology, FlyBase and others. It is the first ontology of its kind, proposed in 2004 for the purpose of integrating protein data and information sources on this scale.

39) Telemedicine User Interfaces (Researchers: David Calder, Vidyasagar Potdar)

The QWERTY keyboard is not available to many disabled people who are nevertheless quite able to use computer-based systems. Various input devices have already been developed and are being tuned to the requirements of particular disabilities and individuals. A large and growing ageing population includes many people with arthritis who are unable to use the standard consumer keyboard interfaces available, yet may still wish to communicate through the Web. Various intelligent single- and dual-switch options are possible for users with very little movement; eyelash-movement pickup sensors are typical. However, there are significantly more advanced options for those who may appear to have no movement at all. These novel alternative input devices sense the slightest movement, or even the muscular electrical activity indicating user intent. These signals can be processed and the data transmitted, if necessary, from remote locations to health professionals in large city hubs.
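A toy Python sketch of how such an input device might turn a noisy muscle-activity signal into a discrete switch event: rectify the signal, smooth it with a moving average, and compare the envelope against a threshold. The signal, window size and threshold are invented for illustration and do not describe the actual device design.

    import numpy as np

    # Illustrative detection of user intent from a noisy EMG-like signal.
    # All numbers here are invented for the sketch.
    rng = np.random.default_rng(0)
    fs = 1000                                   # samples per second
    signal = 0.05 * rng.standard_normal(3000)   # 3 s of baseline noise
    signal[1500:1700] += 0.6                    # brief burst of muscle activity

    rectified = np.abs(signal)
    window = 50                                 # 50 ms moving-average window
    envelope = np.convolve(rectified, np.ones(window) / window, mode="same")

    THRESHOLD = 0.2
    active = envelope > THRESHOLD
    onsets = np.flatnonzero(np.diff(active.astype(int)) == 1)
    for i in onsets:
        print(f"switch event at t = {i / fs:.3f} s")  # would trigger an input action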

40) Spatial Orientation Prompts (Researcher: David Calder)

The full potential of computer-based human orientation and navigation aids for visually and cognitively impaired clients is yet to be realised. There are many navigation aids available to the blind, but not for those with dementia involving short-term memory loss. We are developing a range of orientation aids with unique user interfaces which overcome many current spatial orientation concerns by giving clients increased freedom and confidence: for the blind, when moving in close-proximity space; and for those with short-term memory loss, in remembering where they are in space at any moment, where they have been, and where they are going. These devices seek to mirror the existing learned behaviour and experiences of both visually and cognitively impaired persons. The designs attempt to minimise cognitive dissonance issues, so the trauma of moving from one technology to another is reduced. Advanced user navigation aid designs for the visually impaired are currently being commercialised, and a provisional patent has been secured through Wrays Patent Attorneys. Seed funding is currently being used to develop the relevant trial prototypes.

TC-II Short Courses

TCII offers industry-based short courses on applying a set of technologies to turn industrial problems into solutions powered by data and meaningful information. The courses address industrial processes and productivity, compliance and standards, data quality and governance, and customer service and market position. TCII researchers are widely considered outstanding experts in the field, with the research, projects and awards to prove it. We share our wealth of expertise in industrial informatics through a series of short courses and workshops, taught by our leading researchers, professors and industry experts from around the world. The TCII short courses are as follows:

1) Data Mining for Big Data Intelligence

Data mining is the act of detecting patterns in existing data repositories. Depending on the data mining algorithms applied, underlying or 'hidden' patterns in the data can be detected. These patterns provide useful insight to business analysts, business managers and senior business executives in forming business strategies for strategic decision-making in organisations. The aim of this course is to provide attendees with the theoretical and practical knowledge, tools and techniques of data mining, and their applications in enterprises.
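To give a flavour of the material, the short Python sketch below performs the candidate-counting step at the heart of the Apriori algorithm covered in Topic 4, finding item pairs that co-occur in a minimum fraction of transactions. The transaction data and support threshold are made up for illustration.

    from itertools import combinations
    from collections import Counter

    # Hypothetical transaction database: each row is one customer basket.
    transactions = [
        {"bread", "milk", "butter"},
        {"bread", "milk"},
        {"milk", "butter"},
        {"bread", "butter"},
        {"bread", "milk", "butter", "jam"},
    ]
    min_support = 0.6  # a pair is 'frequent' if it occurs in >= 60% of baskets

    # Count how often each item pair co-occurs (the candidate-counting step
    # that Apriori performs level by level).
    pair_counts = Counter()
    for basket in transactions:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    n = len(transactions)
    frequent = {p: c / n for p, c in pair_counts.items() if c / n >= min_support}
    print(frequent)  # {('bread', 'milk'): 0.6, ('bread', 'butter'): 0.6, ...}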

Benefits

After completing the course, attendees will have gained the necessary theoretical and practical knowledge of data mining techniques. The course is highly recommended for, and suited to, anyone who would like to leverage data mining in their organisation to gain business insight.

Contents

Topic 1 – Introduction (Knowledge Discovery and Data Mining)

Topic 2 – Data Understanding (problems that commonly occur with real-world data, and the current techniques used for data analysis)

Topic 3 – Data Preparation (the need for different kinds of data pre-processing and the techniques available for each kind; in particular, the importance of, and reasons for, choosing a particular technique)

Topic 4 – Association Rule Mining (association rule mining and the Apriori algorithm, a flavour of which is given in the sketch above)

Topic 5 – Decision Trees (decision tree learning and an overview of existing algorithms; the advantages and disadvantages of decision tree learning, and the purpose and method of decision tree pruning)

Topic 6 – Bayesian Inference (Bayesian inference and the background for its use in data mining; the Naive Bayes classifier and Bayesian Belief Networks (BBNs))

Topic 7 – Neural Networks (symbolic rule extraction, network pruning, generalisation and learning in general)

Topic 8 – Clustering and Other Methods (different clustering techniques and other algorithms used in data mining)

Topic 9 – Review of the Whole Data Mining Process (re-establishing the links within the material learned, to increase understanding of the knowledge discovery process as a whole)

Topic 10 – Applications and Case Studies (real-world applications of data mining)

2) Cloud Computing and Services

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources. Cloud services are the software, platforms and infrastructure delivered by cloud computing. There has been a great deal of attention in business and industry on "cloud computing and services" as a means of addressing business needs, and it is widely believed that in the next decade the cloud will reshape the technology landscape in business. The Obama administration announced that US government services would move to the cloud, and the Australian government has indicated that its public services will be available from the cloud. The aim of this course is to present cloud computing to attendees and show how businesses can leverage the cloud to address their needs.
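As a small, concrete illustration of consuming an infrastructure cloud service (the style of exercise behind Topics 6 and 7 below), the Python sketch below uploads a file to Amazon S3 with the boto3 SDK and lists the result. The bucket and file names are hypothetical placeholders, and the snippet assumes AWS credentials are already configured locally.

    import boto3

    # Assumes AWS credentials are configured (e.g. via ~/.aws/credentials).
    # Bucket and key names are hypothetical placeholders.
    s3 = boto3.client("s3")

    # Upload a local report so other services or partners can consume it.
    s3.upload_file("quarterly_report.csv", "example-company-data",
                   "reports/quarterly_report.csv")

    # List what is stored under the reports/ prefix to confirm the upload.
    response = s3.list_objects_v2(Bucket="example-company-data", Prefix="reports/")
    for obj in response.get("Contents", []):
        print(obj["Key"], obj["Size"])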

Benefits

This workshop provides a guide to understanding and using cloud services for mission-critical business units as well as routine daily business logic. It also covers the design, development and deployment of cloud applications in business.

Contents

Topic 1 – Cloud computing and cloud services

Topic 2 – The differences between Service Oriented Computing, Grid and Cloud

Topic 3 – The use of cloud computing for your organization

Topic 4 – Refine business cases, drivers, strategies, and business processes for cloud

Topic 5 – Deployment models and cloud virtualization

Topic 6 – Design and develop your cloud with Google™

Topic 7 – Design and develop your cloud with Amazon™

Topic 8 – Cloud interoperability and standards

Topic 9 – Cloud security and privacy

Topic 10 – Conclusion, feedback, and future directions

3) Cyber-Physical Systems – The New Frontier

The very recent development of Cyber-Physical Systems (CPS) provides a smart infrastructure connecting abstract computational artefacts with the physical world. The solution to CPS must transcend the boundary between the cyber world and the physical world by providing integrated models addressing issues from both worlds simultaneously. This needs new theories, conceptual frameworks and engineering practice. This course sets out the key requirements that must be met by CPS, and reviews and evaluates the progress that has been made in the development of theory, conceptual frameworks and practical applications. A case study of using CPS to enable smart electricity grids within the industrial informatics field is then presented, and the grand challenges that CPS poses to informatics are raised at the end.

However, building CPS is not a trivial task. It requires a new ground-breaking theory that models cyber and physical resources in a unified framework. This is a huge challenge that none of the current state-of-the-art methods is able to overcome, because computer science and control theory were developed independently, each resting on over-simplified assumptions about the other. For example, many key requirements crucial to physical systems (e.g. uncertainty and inaccuracy) are not captured or fully dealt with in the computer science research agenda. In a similar vein, computational complexity, system evolution and software failure are often ignored from the physical control theory viewpoint, which treats computation as a precise, error-free, static 'black box'. A unified infrastructure permitting integrated models that address issues from both worlds simultaneously is therefore needed. The course begins by setting out the requirements, then evaluates the progress made in addressing them, and shows that CPS calls for new theories and some fundamental changes to the existing computing paradigm.
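To make the coupling of computation and physical dynamics concrete (see Topic 5 below), here is a minimal Python sketch of a discrete-time feedback loop: a proportional controller (the cyber part) regulating a first-order thermal process (the physical part). The plant model, gains and constants are illustrative assumptions, not a real deployment.

    # Minimal cyber-physical feedback loop: a proportional controller
    # (computation) regulating a first-order thermal process (physics).
    # All constants below are illustrative assumptions.
    dt = 0.1          # controller sampling period, seconds
    setpoint = 50.0   # desired temperature, degrees C
    temp = 20.0       # initial plant temperature (equals ambient)
    k_p = 2.0         # proportional gain of the controller
    tau = 5.0         # time constant of the physical process

    for step in range(300):
        # Cyber side: sample the sensor and compute the actuation command.
        error = setpoint - temp
        heater_power = max(0.0, k_p * error)  # actuator saturates at zero

        # Physical side: first-order response to the applied heater power.
        temp += dt * (heater_power - (temp - 20.0) / tau)

        if step % 50 == 0:
            print(f"t={step * dt:5.1f}s  temp={temp:6.2f}C  power={heater_power:6.2f}")

Note that the proportional controller settles slightly below the setpoint (steady-state offset), one of the classic control-theoretic effects that a purely 'cyber' view of computation would miss.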

Benefits

This short course provides an introduction to the fundamentals of Cyber-Physical Systems (CPS), including CPS requirements, reference architectures, CPS design and implementation, and a range of industrial applications.

Contents

Topic 1 – CPS Foundations, Systems of Systems, and comparison with Cloud and Grid

Topic 2 – CPS, IoT and WoT, standards and architecture requirements, fabric and nodes

Topic 3 – Computation, networking and physical processes

Topic 4 – Embedded computers and networked software systems

Topic 5 – Computation and Feedback Loops

Topic 6 – Software embedded in devices, stochastic processes

Topic 7 – Dynamic integration of the physical processes and processes of transforming data

Topic 8 – Conjoining abstractions: modelling, design and analysis

Topic 9 – CPS, IoT and WoT standards and requirements

Topic 10 – CPS in healthcare, transportation and smart energy

4) Ontology Modeling, Engineering and Evolution

Ontologies are widely used in knowledge engineering, artificial intelligence and computer science, in applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bioinformatics, education and software development, and in new emerging fields like the semantic web. Ontologies include rich relations between terms, and these rich relations enable the expression of domain-specific knowledge without the need to include domain-specific terms. Therefore, a true ontology should contain not only a hierarchy of concepts organised by the subsumption relation but also other 'semantic relations' that specify how one concept is related to another. It is important to remember that ontologies are organised around concepts, not words; this can be helpful in recognising and avoiding potential logical ambiguities. Ontologies developed independently for different purposes will often differ greatly from each other. The main purpose of an ontology is to enable communication between computer systems so that they can perform certain types of computation and communication. The key ingredients that make up an ontology are a vocabulary of basic terms and a precise specification of what those terms mean.
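To ground these ideas, here is a minimal Python sketch of a small ontology: concepts organised by the subsumption (subClassOf) relation, plus one semantic relation beyond the hierarchy, built with the rdflib library. The course itself works with the Protégé OWL editor (Topic 2); the class and property names below are invented for illustration.

    from rdflib import Graph, Namespace, RDF, RDFS
    from rdflib.namespace import OWL

    # A hypothetical mini-ontology about logistics services.
    EX = Namespace("http://example.org/logistics#")
    g = Graph()
    g.bind("ex", EX)

    # Concepts organised by the subsumption (subClassOf) relation.
    g.add((EX.Service, RDF.type, OWL.Class))
    g.add((EX.Provider, RDF.type, OWL.Class))
    g.add((EX.TransportService, RDF.type, OWL.Class))
    g.add((EX.TransportService, RDFS.subClassOf, EX.Service))

    # A 'semantic relation' beyond the hierarchy: who offers a service.
    g.add((EX.offeredBy, RDF.type, OWL.ObjectProperty))
    g.add((EX.offeredBy, RDFS.domain, EX.TransportService))
    g.add((EX.offeredBy, RDFS.range, EX.Provider))

    print(g.serialize(format="turtle"))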

Benefits

After this course you will grasp the principal concepts of ontologies and how ontologies and knowledge bases are related, and have gained experience in designing and developing an ontology.

Contents

Topic 1 – Ontology Definition

Topic 2 – Ontology Editor Protégé OWL

Topic 3 – Components of OWL Ontologies

Topic 4 – Ontology Fundamentals

Topic 5 – Ontology Engineering

Topic 6 – Ontology Implementation

Topic 7 – Ontology Reasoning Tools

Topic 8 – Ontology with Knowledge Base

Topic 9 – Ontology Maintenance

Topic 10 – Case studies

5) Wireless Sensor Networks for Environmental Sustainability

Wireless Sensor Networks (WSNs) have attracted a great deal of interest recently, since they open new challenges through the development of interesting applications such as surveillance and environmental monitoring. The development of efficient protocols for WSN communication, as well as of the WSN motes themselves, has presented numerous challenges for the research community. Energy is the biggest concern for such networks, and achieving high energy efficiency is of paramount importance for the longevity of the network. To combat this energy challenge, energy-efficient hardware and communication protocol design for such devices has recently attracted the attention of the research community.
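Because energy dominates the design space, WSN studies often start from a simple first-order radio energy model; the Python sketch below computes the energy cost of sending and receiving a packet over various distances. The constants are typical textbook values used purely as assumptions here.

    # First-order radio energy model commonly used in WSN studies.
    # Constants are typical textbook values, assumed for illustration.
    E_ELEC = 50e-9     # J/bit consumed by transmitter/receiver electronics
    EPS_AMP = 100e-12  # J/bit/m^2 consumed by the transmit amplifier

    def tx_energy(bits, distance_m):
        """Energy (J) to transmit `bits` over `distance_m` (d^2 path loss)."""
        return E_ELEC * bits + EPS_AMP * bits * distance_m ** 2

    def rx_energy(bits):
        """Energy (J) to receive `bits`."""
        return E_ELEC * bits

    packet = 2000  # bits in a 250-byte packet
    for d in (10, 50, 100):
        print(f"d={d:4d} m  tx={tx_energy(packet, d) * 1e6:8.2f} uJ  "
              f"rx={rx_energy(packet) * 1e6:6.2f} uJ")

The quadratic distance term is why multi-hop routing and clustering protocols can save energy: several short hops are often cheaper than one long transmission.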

Benefits

The workshop provides an ideal opportunity for industry and academia to gain an in-depth understanding of WSNs and their applications.

Contents

Topic 1 – Introduction and Overview of WSNs

Topic 2 – Commercial and scientific applications of WSNs, such as vehicular emission measurement

Topic 3 – WSN Architecture

Topic 4 – WSN Communication Protocol Stack

Topic 5 – WSN system requirements and challenges

Topic 6 – WSN simulation tool

Topic 7 – Quality-of-Service

Topic 8 – Cross-layer optimisation

6) Security, Trust, and Privacy

Security, trust, privacy and risk are fundamentals of business from both the provider's and the consumer's points of view. Imagine losing all of your company's data! Worse still, imagine if your competitors got hold of your information, financials, and new research and development results. Unfortunately, in today's world we need to consider high levels of protection for our information and data, securing it from predators, loss and destruction and reducing its vulnerability. An organisation needs to identify its system's strengths and weaknesses and measure its overall performance against ideals and best practice. Trust has played a central role in human relationships and has been the subject of study in many fields including business, law, social science, philosophy and psychology. It has been vital to people being able to form contracts, carry out business and work together cooperatively, and underpins many forms of collaboration. Closely related to this notion of trust is the concept of reputation within a community of peers, frequently used as the basis for judging whether to trust an individual or organisation, particularly in the absence of previous direct contact. Privacy issues have been gaining attention from lawmakers, regulators and the media. As a result, businesses are under pressure to draft privacy policies and post them on their web sites, chief privacy officers are becoming essential members of many enterprises, and companies are taking proactive steps to avoid the potential reputation damage of a privacy mistake. As new technologies are developed, they increasingly raise privacy concerns: the World Wide Web, wireless location-based services and RFID chips are just a few examples.

Additionally, the recent focus on national security and fighting terrorism has brought with it new concerns about governmental intrusions on personal privacy. Almost every e-commerce transaction carries possible undesired outcomes that the person undertaking it hopes will not occur. The quantification of the likelihood of such undesired outcomes occurring can be termed risk, and this applies equally to transactions in the field of e-commerce. One major characteristic of e-commerce transactions is that they may be conducted in virtual environments, so the consumer generally has no opportunity to see and try the product before buying it; from the consumer's point of view there is therefore a high level of risk involved in these transactions. To address these issues, this course provides an in-depth look into security, trust, privacy and risk in the business environment, together with related technologies and case studies.
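As a toy illustration of the reputation measurement theme (Topic 4 below covers the group's CCCI metrics; the formula here is a generic time-decayed weighted average, not CCCI), the Python sketch below aggregates peer ratings so that recent interactions count for more than old ones.

    import time

    # Generic time-decayed reputation score: recent ratings weigh more.
    # An illustrative formula, not the CCCI metrics taught in this course.
    HALF_LIFE = 30 * 24 * 3600  # ratings lose half their weight after 30 days

    def reputation(ratings, now):
        """ratings: list of (score in [0, 1], unix timestamp) pairs."""
        num = den = 0.0
        for score, ts in ratings:
            w = 0.5 ** ((now - ts) / HALF_LIFE)  # exponential decay weight
            num += w * score
            den += w
        return num / den if den else 0.5  # neutral prior when no history

    now = time.time()
    history = [(1.0, now - 90 * 86400),  # an excellent but old interaction
               (0.2, now - 2 * 86400),   # recent poor experiences
               (0.4, now - 86400)]
    print(f"reputation = {reputation(history, now):.3f}")  # recent ratings dominate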

Benefits

Attendees will gain an understanding of security, trust, privacy and risk from philosophical, historical, ethical and technical perspectives.

Contents

Topic 1 – Introduction to information and data security

Topic 2 – Information and data security technologies (data inspection, data protection, data detection, reaction and reflection)

Topic 3 – Introduction to trust and reputation

Topic 4 – Trustworthiness and reputation measurement and prediction methodologies (CCCI metrics etc.)

Topic 5 – Introduction to privacy

Topic 6 – Privacy protection technologies and applications (online privacy protection, P3P, anonymity, pseudonymity, government surveillance, privacy surveys, etc.)

Topic 7 – Introduction to risk

Topic 8 – Risk measurement, prediction and management methodologies (CCAS metrics etc.) and case studies

Topic 9 – Conclusion, feedback and future directions

7) Data Quality and Data Warehousing for Corporate Governance and Responsibility

The data quality issue has attracted increasing attention in recent years, and more and more companies are attempting to cleanse their data. It is reported that 53% of companies have suffered losses due to poor data quality, and that the data quality problem costs the US economy over US$600 billion per annum. Data cleansing is the act of detecting and correcting corrupt or inaccurate records in data repositories. Data warehouse technology is designed to provide multi-dimensional analysis for decision-making. The aim of this course is to provide attendees with the theoretical and practical knowledge, tools and techniques of data quality, data cleansing and data warehousing, and their applications in enterprises.
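To give a flavour of the hands-on component, the Python sketch below runs three standard cleansing steps (duplicate detection, missing-value handling and a range check) over a made-up customer table with the pandas library; the column names and validity rule are illustrative assumptions.

    import pandas as pd

    # Hypothetical customer records exhibiting typical quality problems.
    df = pd.DataFrame({
        "customer_id": [101, 102, 102, 103, 104],
        "name": ["Ann", "Bob", "Bob", None, "Dee"],
        "age": [34, 41, 41, 29, 230],  # 230 is clearly invalid
    })

    # 1. Detect and drop exact duplicate records.
    print("duplicates:\n", df[df.duplicated()])
    df = df.drop_duplicates()

    # 2. Flag missing values, then fill them according to a business rule.
    print("missing per column:\n", df.isna().sum())
    df["name"] = df["name"].fillna("UNKNOWN")

    # 3. Enforce a simple validity rule: a plausible age range.
    df = df[df["age"].between(0, 120)]

    print(df)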

Benefits

  • Gain the necessary theoretical and practical knowledge for initiating and leading a data quality assessment, data cleansing and data warehouse project in their organisations.
  • Learn how to position their organisations to move to the next agile, mobile and global era.

Contents

Topic 1 - Why data quality, data cleansing and data warehouse – causes, problems, and challenges for corporate governance

Topic 2 - Data quality rules

Topic 3 - Data quality assessment methods 

Topic 4 - Data cleansing issues, processes, methods, and applications 

Topic 5 - Data warehouse: an overview

Topic 6 - Planning and business requirement for a data warehouse project

Topic 7 - Application of data warehouse in corporations

Topic 8 - Conclusion, feedback, and future directions

TCII Australian Office
TCII Chair: Professor Elizabeth Chang 
E-mail:  elizabeth.chang@unsw.edu.au 
Tel: +61 418 122 830, +61 2 6268 8450

TCII Secretary: Dr. Omar Hussain
E-mail: O.Hussain@adfa.edu.au
Tel: +61 (2) 62688512

TCII Canada Office
Professor Bill Smyth
E-mail: smyth@mcmaster.ca
Tel: +1 905 523 7568, +1 905 525 9140

TCII Germany Office
Professor Achim Karduck
E-mail: Achim.Karduck@hs-furtwangen.de 
Tel: +49 7666913222

TCII Italy Office
Professor Ernesto Damiani
E-mail: ernesto.damiani@unimi.it 
Tel: +39 0373 898064

TCII Malaysia Office
Dr Vish Ramakonar
E-mail: vishram74@gmail.com 
Tel: +61 404 713 249

TCII China Office
Professor Jie Li
E-mail: liujie@fudan.edu.cn 
Tel: +86 25011243

TCII Japan Office
Associate Professor Kouji Kozaki
E-mail: kozaki@ei.sanken.osaka-u.ac.jp 
Tel: +81-6-6879-8416

TCII IT Support 
Dr Naeem Janjua
E-mail: n.janjua@unsw.edu.au 
Tel: +61 2 626 88149

Web Master
Ms. Maryam Haddad
E-mail: maryam.haddadm@gmail.com
Tel: +61 2 626 88149, +61 (2) 62688512

Admin Support
Dr Sazia Parvin
E-mail: s.parvin@unsw.edu.au 
Tel: +61 2 626 88149