The present network landscape contains many different network technologies, a wide range of end devices with greatly varying capabilities and power, and an immense quantity of information and data represented in different formats. According to CISCO IBSG (April 2011), by 2020 approximately 50 billion connected devices will be used by a world population of approximately 7.6 billion, which means six to eight connected devices per person (Figure 1). There are so many different ways to communicate with each other that maintaining all these different systems becomes a complex task. Additionally, new methods have arisen to communicate not only with human beings but also with machines, ranging from simple Web-based software up to the voice-controlled household. As is well known, rapid development brings benefits together with problems. Two main issues can be identified: on the one hand, a user needs several devices to use the different methods of communication; on the other hand, content and user interfaces are difficult to handle because they may involve different kinds of data formats such as text, image, audio, video, 3D virtual-reality data, and other upcoming formats. A very suitable example of such a system is mobile learning, because of the large number of devices with significantly different features and functionalities. This holds not only for supporting different learners, e.g. all learners within one learning community, but also for supporting the same learner using different equipment in parallel or at different times. Such applications may be significantly enhanced by including virtual-reality content presentation.
Whatever the purpose, it is impossible to develop and adapt content individually for all kinds of devices, including mobile ones, due to the different capabilities of the devices, cost issues and the authors' requirements. A solution is needed that automates the content adaptation process. To realize such a system, three major requirements have to be fulfilled: a) identification of the connected device, b) generation, structuring and storage of generalized content, and c) a transformation process from generalized content to optimized, device-dependent content. This idea is implemented in a supporting tool for language acquisition for adult learners named the Language Learning Game (LLG).
According to the Critical Period Hypothesis (CPH), after a certain age it is not easy for most people to learn a new language. Moreover, language acquisition requires frequent interaction with others, which is difficult for adults due to their other social and professional responsibilities. Adult learners therefore need a supporting tool that is always available, so that they can learn and interact with others any time and anywhere. Nobody carries a high-end device like a laptop all the time, and few would be motivated to open one somewhere just for a couple of minutes of learning. The only device that appears to solve these problems is a small mobile device such as a mobile phone, which is always switched on and which everybody carries everywhere.
The LLG is an example of a suitable tool for adult learners because it is a device-independent application in which different kinds of devices and data formats are used for collaborative learning. The main idea of the game is to create a short story in a foreign language using mobile devices. The story is developed by a group of participants exchanging sentences. In this way the participants can learn from each other by sharing their individual knowledge, without the need for constant support from a tutor and without the fear of making mistakes, because the group members are anonymous.
HISTORY AND STATE OF THE ART
Systems that enable communication independent of the underlying media already exist. The first of its kind was the Mobile People Architecture (MPA), developed by the Mosquito Group at Stanford University. It implemented the idea of a so-called personal proxy by extending the traditional ISO/OSI reference model with a personal layer. This so-called personal-level routing is the main achievement of the MPA, but it introduces a bottleneck, namely the personal proxy: all communication has to pass through the proxy even when an alternative, faster path exists. This problem is solved in the Iceberg architecture. Although that system is based on the MPA, it has one major difference: it no longer provides a decentralized proxy for every user but concentrates many proxies in centralized units called Iceberg points of presence (IPoPs). These IPoPs have interfaces to many access networks (e.g., telephony, cellular, and Internet) and are interconnected by fast network connections, which ensures that all communication can be routed in a fast and direct way. The Iceberg architecture is therefore a highly developed system that enables device-independent communication, but it does not support the user beyond this functionality and is limited to communication services. Later, the Integrated Personal Mobility Architecture (IPMoA) took a slightly different approach to ensure device independence and user mobility. It focuses not primarily on reachability but on the mobility of the users, who can access all their data and applications from every remote location and with every available device. By including communication applications, device-independent communication becomes possible. The whole system is based on agents that commute between the home and the foreign network and exchange data between them.
A high level of personalization can thereby be reached, but since nearly all data must be fetched from the home network, problems may arise, for instance with time-sensitive and synchronous applications. ICAT (Identification, Classification, Adaptation and Tagged XML), developed at the University of Applied Sciences Offenburg (2010), for the first time combined the ideas of device and content independence. However, the concept depends on the User Agent Profile (UAProf), which is not supported by all devices, such as newer Windows mobile devices, iDEN handsets and legacy handsets. In 2006 a doctoral thesis by Dr. Feisst aimed to make 3D data visible on mobile handsets, but the concept could not be proven in a device-independent way. Moreover, due to the limited processing power of mobile handsets, the system depended on a proxy server.
The goal of this ongoing research is to give content providers and users of the system the freedom to use any kind of content format and any kind of communicative device, independent of the purpose of the communication. For example, the system could equally be used for distance learning or mobile advertising. Since support for as many different devices and data formats as possible is desired, committing to one technique or direction was not an option. Integrating recognition of all available devices and support for all existing and upcoming data formats is not possible within the given time limitation. Therefore only some example systems are integrated in the prototype, but both steps, device recognition and content translation, are performed independently to make future extension easier.
To understand the system, it is first necessary to understand how the game works, how device identification takes place, why a generalized content format is needed, and how and when content adaptation is done. Figure 2 depicts an overview of the system.
Description of the game
The LLG was developed to allow language learners to practice the target language using a mobile device. It is not designed for beginning language learners, but rather for people with a basic knowledge of the desired foreign language. The game requires a supervisor who has very good knowledge of the foreign language to perform an overall check at the very end.
The instructor or supervisor creates the game, provides the instructions, and decides the number of members in each group and the end condition of the game. At this stage the instructor is the learning content provider or author and may use any kind of data format as necessary, available or convenient; for example, an instruction could contain text, pictures, audio, video, 3D images, a virtual-reality scene, or a combination of different data formats. Group members are assigned randomly, and each group should contain no more than three to five members. It is not possible to predict which member will use which kind of device; moreover, each user may use different kinds of devices at different times or in parallel. Participants write sentences with the aim of building a short story in the desired language. When a group member writes and sends a sentence, the other members of the group have the chance to dispute it by proposing another version, or they can simply agree with it. The rules of the game only allow them to correct spelling or grammatical mistakes in their proposed version. Afterwards all group members receive the proposed sentence and the possible corrected versions. At this stage every group member has to decide which version they agree with and vote for it. The sentence with the most votes is elected; in case of a tie, the first submitted version wins. Each group member has to create one sentence per round, and the game typically finishes after fifteen to twenty rounds. The final short story is sent to all group members as well as to a supervisor, who performs an overall correction. Afterwards all participants receive the supervisor's corrections of all the mistakes they made while playing the game. This overall correction is necessary because the majority of a group may agree on an incorrect sentence.
In the end, the corrections made by the supervisor help the participants to learn correctly.
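The election step described above can be sketched as a small routine. This is a minimal illustration of the voting rule, assuming proposals are kept in submission order; the function name and data shapes are illustrative assumptions, not part of the actual LLG implementation.

```python
from collections import Counter

def elect_sentence(proposals, votes):
    """Pick the winning version of a sentence for one round.

    proposals: sentence versions in submission order
    votes: one index into `proposals` per group member
    Ties are broken in favour of the earliest submitted version,
    as the game rules require.
    """
    tally = Counter(votes)
    # max() compares (vote count, -submission index), so on equal
    # vote counts the earlier submitted version wins.
    best = max(range(len(proposals)), key=lambda i: (tally[i], -i))
    return proposals[best]
```

For example, with two proposals and one vote each, the first submitted version is elected, matching the tie-break rule of the game.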
As mentioned before, this game is an example of mobile and distance learning that should be playable on any kind of device, for example a desktop computer or a mobile phone. The very first responsibility of the system is therefore to identify the end user's device, along with its capabilities, as soon as the user logs in. Users must be able to see the instructions of the game on whichever device they use in order to play it, which may require transforming the instructions according to the capabilities of the end user's device.
Process of device identification
The non-commercial version of the Wireless Universal Resource File (WURFL) was selected for describing the features of mobile devices and browsers, because the WURFL model is an XML configuration file that contains information about the capabilities and features of many mobile devices in the wireless world. Our system works with a combination of WURFL and a local object-oriented database; DB4O was chosen as the object-oriented database because it suited the project requirements best (Figure 2).
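The core of a WURFL-based lookup is matching the requesting user agent to a device entry and then resolving capabilities along the fall-back chain. The sketch below illustrates that idea using Python's standard `xml.etree.ElementTree` module; the inline snippet mimics the WURFL element names (`device`, `user_agent`, `fall_back`, `capability`), but the device entries and values are illustrative assumptions, not data from the real WURFL file.

```python
import xml.etree.ElementTree as ET

# A tiny WURFL-style snippet; the real file lists thousands of devices.
WURFL_SNIPPET = """
<devices>
  <device id="generic" user_agent="" fall_back="root">
    <capability name="resolution_width" value="90"/>
  </device>
  <device id="nokia_n95" user_agent="NokiaN95" fall_back="generic"/>
</devices>
"""

def load_devices(xml_text):
    """Parse device entries into a dict keyed by device id."""
    devices = {}
    for dev in ET.fromstring(xml_text).iter("device"):
        caps = {c.get("name"): c.get("value") for c in dev.iter("capability")}
        devices[dev.get("id")] = {"ua": dev.get("user_agent"),
                                  "fall_back": dev.get("fall_back"),
                                  "caps": caps}
    return devices

def identify(devices, user_agent):
    """Match the request's user agent; fall back to the generic device."""
    best = "generic"
    for dev_id, dev in devices.items():
        if dev["ua"] and dev["ua"] in user_agent:
            best = dev_id
    return best

def capability(devices, dev_id, name):
    """Walk the fall_back chain until the capability is found."""
    while dev_id in devices:
        dev = devices[dev_id]
        if name in dev["caps"]:
            return dev["caps"][name]
        dev_id = dev["fall_back"]
    return None
```

In the project itself the resolved capabilities would then be cached in the local DB4O database so that repeated logins from the same device skip the XML lookup.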
Process of generating and structuring generalized content
A device-independent system is able to deliver content to any device in such a way that the received content can be presented. This task can be approached in two ways: either content for every device exists in the system, which is very time-consuming, costly and labor-intensive, or the system is able to adapt content for each device. If the system is capable of adapting content to a device-dependent presentation, the content has to be available in the system in a generalized form. Additionally, such a system should support a device-independent authoring process, where the author can focus on content generation rather than on device-dependent content adaptation, for example when providing instructions while creating a game in the LLG. XML has been chosen for the generation, structuring and storage of generalized content. For 3D data the .dae format is used, because it is compatible with most widely used 3D data formats and is itself based on XML.
Figure 3 below shows an example of generalized 3D content (.dae) converted from four different 3D data formats.
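A generalized content item of the kind described above might be assembled along the following lines. The envelope schema (`instruction`, `text`, `media` elements) is an illustrative assumption for this sketch, not the project's actual format; the point is that the author stores one device-neutral XML document and leaves format decisions to the adaptation step.

```python
import xml.etree.ElementTree as ET

def build_instruction(text, media):
    """Wrap authored content in a device-neutral XML envelope.

    media: maps a media type (e.g. "image", "model3d") to a file
    reference; the adaptation step later decides which entries a
    given device can actually receive.
    """
    root = ET.Element("instruction")
    ET.SubElement(root, "text").text = text
    for kind, src in media.items():
        ET.SubElement(root, "media", type=kind, src=src)
    return ET.tostring(root, encoding="unicode")
```

An author could, for instance, attach both a .jpg preview and a .dae scene to the same instruction, so that a low-end phone receives only the image while a capable device receives the 3D content.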
Process of content adaptation
Tim Berners-Lee, creator of the WWW, stated: "The Mobile Web Initiative is important – information must be made seamlessly available on any device". In order to optimize the content presentation on different devices, or in other words on "any device", the generalized content has to be adapted or translated in a device-dependent manner. The W3C, in W3C-MBP, categorizes three places where adaptation can take place: (1) client-side, (2) server-side and (3) proxy-side. It is intended to use any of these three, or a combination of them, as needed. For example, for a 3D picture or a virtual-reality scene as learning content, a proxy-based approach is advantageous: back in 2006, mobile devices, especially phones, did not have enough processing power, Internet access was limited and expensive, and there was no support for 3D data formats. If the client device does not suffer from any of these problems and already has the required processing power and the capability to open the corresponding 3D data format, content adaptation takes place directly on the client side instead of the proxy side. An example of client-side adaptation of 3D data is shown in figure 4.
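The decision logic in this paragraph could be expressed as a simple rule over the identified device's capabilities. The capability names (`supported_formats`, `cpu_mhz`) and the threshold are assumptions made for this sketch, not values defined by WURFL or the project; a real deployment would tune them per device class.

```python
def choose_adaptation_side(caps, content_format):
    """Decide where to adapt content for one request.

    caps: capability dict for the identified device
    Returns "client", "server", or "proxy", following the three
    W3C-MBP categories; thresholds are illustrative only.
    """
    if content_format in caps.get("supported_formats", set()):
        # The device renders the format natively: adapt client-side.
        return "client"
    if content_format in {"dae", "vrml", "3ds"} and caps.get("cpu_mhz", 0) < 400:
        # Heavy 3D conversion on a weak device: delegate to a proxy.
        return "proxy"
    # Otherwise transform the generalized content on the server.
    return "server"
```

This mirrors the text above: a capable phone opens the .dae scene itself, while a 2006-era handset would have the proxy convert the 3D content into something it can display.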
Device detection, the generation, structuring and storage of generalized content, and the translation and transfer of the generalized content according to the detected device's capabilities are working successfully with the text and .jpg data formats on the Connected Limited Device Configuration (CLDC)/Mobile Information Device Profile (MIDP) platform. CLDC 1.1 and MIDP 2.0 were used with some optional packages such as Mobile 3D Graphics 1.1 and Wireless Messaging 2.0. So far the proposed architecture has been tested with the Nokia N95, Sony Ericsson P1i, Sony Xperia S, iPad 4, Nexus 7 and iPhone 5 using rich-text, JPEG and BMP data formats.
Moving forward, the system has to be improved and tested with further devices and data formats, especially 3D data and VR scenes. This includes not only different devices running the same operating system but also different operating systems such as Windows Smartphone/Pocket PC, Linux and BlackBerry OS, and further data formats, especially 3D formats such as OBJ, VRML, 3DS and IGES. The experimental validation could be conducted between two or three virtual-reality systems, including large-size 3D images by polarization separation and mobile phones with 3D capabilities based on a micro-lens filter or other technology. Our main focus is on polygonal modeling instead of non-uniform rational basis splines (NURBS), because the data are comparatively easier to handle.
The realization of a device- and content-independent system architecture, equally useful for learning and business purposes, has to take into account different devices with various capabilities along with the heterogeneity of content in order to produce an optimized content presentation. A successful implementation of the complete system is expected to pave the way for mobile learning, digital 3D image sharing and mobile advertising independent of the time, place and social status of the user.