Showing 5 changed files with 55 additions and 31 deletions.

@@ -1,21 +1,46 @@
-#Cosmos
+#<a name="top"></a>Cosmos
[](https://opensource.org/licenses/AGPL-3.0)
[](http://fiware-cosmos.readthedocs.org/en/latest/?badge=latest)

This project is part of [FIWARE](http://fiware.org).

-[Cosmos](http://catalogue.fiware.org/enablers/bigdata-analysis-cosmos) is the code name for the Reference Implementation of the BigData Generic Enabler of FIWARE.

-Cosmos comprises several different sub-projects:
+[Cosmos](http://catalogue.fiware.org/enablers/bigdata-analysis-cosmos) is the code name for the Reference Implementation of the BigData Generic Enabler of FIWARE, a set of tools and developments that help to enable a Hadoop as a Service (HaaS) deployment:

* A set of administration tools such as HDFS data copiers and much more, under the [cosmos-admin](./cosmos-admin) folder.
* An OAuth2 token generator, under the [cosmos-auth](./cosmos-auth) folder.
* A web portal for user and account management, running MapReduce jobs and doing big data I/O, under the [cosmos-gui](./cosmos-gui) folder.
* A custom authentication provider for Hive, under [cosmos-hive-auth-provider](./cosmos-hive-auth-provider).
* A REST API for running MapReduce jobs in a shared Hadoop cluster, under [cosmos-tidoop-api](./cosmos-tidoop-api).
-* A specific OAuth2-base proxy for Http/REST operations [cosmos-proxy](./cosmos-proxy).
+* A specific OAuth2-based proxy for HTTP/REST operations, under [cosmos-proxy](./cosmos-proxy).

+[Top](#top)

+##If you want to use Cosmos Global Instance in FIWARE Lab
+If you are looking for information regarding the specific deployment of the Cosmos Global Instance in FIWARE Lab, a ready-to-use HaaS, please check this documentation:

+* [Quick Start Guide](./doc/manuals/quick_start_guide_new.md) for Cosmos users.
+* Details on using [OAuth2 tokens](./doc/manuals/user_and_programer_manual/using_oauth2.md) as the authentication and authorization mechanism.
+* Details on using the [WebHDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html) REST API for data I/O (you can also check [this](./doc/manuals/user_and_programer_manual/data_management_and_io.md) link); a sketch combining OAuth2 tokens and WebHDFS is shown after this list.
+* Details on using the [Tidoop](./doc/manuals/user_and_programer_manual/using_tidoop.md) REST API for MapReduce job submission.
+* Details on developing [MapReduce jobs and Hive clients](./doc/manuals/user_and_programer_manual/using_hadoop_and_ecosystem.md) (already developed Hive clients can also be found [here](./resources/hiveclients/)).
+* In general, you may be interested in the [User and Programming Guide](./doc/manuals/user_and_programer_manual), also available at [readthedocs](http://fiware-cosmos.readthedocs.io/en/latest/).
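
As a quick orientation on how these pieces fit together, a typical data I/O session first requests an OAuth2 token and then uses it to call the standard WebHDFS REST API on the user's HDFS space. The following minimal sketch uses Python with the `requests` library; the cosmos-auth endpoint, the WebHDFS host and port, and the `X-Auth-Token` header are assumptions made for illustration and should be checked against the Quick Start Guide and the OAuth2 manual linked above.

```python
# Minimal sketch: obtain an OAuth2 token and list a user's HDFS space via WebHDFS.
# The hostnames, ports and token endpoint below are illustrative assumptions;
# check the Quick Start Guide and the OAuth2 manual for the actual values.
import requests

AUTH_URL = "https://cosmos.lab.fiware.org:13000/cosmos-auth/v1/token"   # assumed cosmos-auth endpoint
WEBHDFS_URL = "http://storage.cosmos.lab.fiware.org:14000/webhdfs/v1"   # assumed WebHDFS/HttpFS endpoint
USER = "myuser"                                                         # hypothetical FIWARE Lab account
PASSWORD = "mypassword"

# 1. Obtain an OAuth2 token (password grant) from cosmos-auth.
resp = requests.post(
    AUTH_URL,
    data={"grant_type": "password", "username": USER, "password": PASSWORD},
)
token = resp.json()["access_token"]  # standard OAuth2 response field

# 2. Use the token (X-Auth-Token header, as enforced by the OAuth2-based proxy)
#    to list the user's HDFS space through the standard WebHDFS LISTSTATUS operation.
listing = requests.get(
    f"{WEBHDFS_URL}/user/{USER}",
    params={"op": "LISTSTATUS", "user.name": USER},
    headers={"X-Auth-Token": token},
)
for status in listing.json()["FileStatuses"]["FileStatus"]:
    print(status["pathSuffix"], status["type"])
```

The same pattern, token first and authenticated REST calls afterwards, should also apply to Tidoop REST API job submissions; see the Tidoop manual linked above for the exact resource paths.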

+[Top](#top)

+##If you want to deploy and use your own private Hadoop instance
+This is the case when you do not rely on the Global Instance of Cosmos in FIWARE Lab. Here you will have to install, configure and manage your own private Hadoop instance; the Internet is full of documentation that will help you.

-##<a name="contact"></a>Reporting issues and contact information
+[Top](#top)

+##If you want to deploy your own public Cosmos instance
+In the (extremely rare) case that you are not interested in using the Global Instance of Cosmos or a private Hadoop instance, but want to become a Big Data service provider building on the Cosmos software, you may be interested in the following links:

+* [Deployment details](doc/deployment_examples/cosmos/fiware_lab.md) for administrators trying to replicate the Cosmos Global Instance in FIWARE Lab.
+* In general, you may be interested in the [Installation and Administration Guide](./doc/manuals/installation_and_administration_manual), also available at [readthedocs](http://fiware-cosmos.readthedocs.io/en/latest/).

+[Top](#top)

+##Reporting issues and contact information
There are several channels suited for reporting issues and asking questions in general. Each one depends on the nature of the question:

* Use [stackoverflow.com](http://stackoverflow.com) for specific questions about this software. Typically, these will be related to installation problems, errors and bugs. Development questions when forking the code are welcome as well. Use the `fiware-cosmos` tag.

@@ -27,3 +52,5 @@ There are several channels suited for reporting issues and asking for doubts in
* [[email protected]]([email protected]) **[Contributor]**

+**NOTE**: Please try to avoid personally emailing the contributors unless they ask for it. In fact, if you send a private email you will probably receive an automatic response urging you to use [stackoverflow.com](http://stackoverflow.com) or [ask.fiware.org](https://ask.fiware.org/questions/). This is because using the mentioned methods creates a public knowledge base that can be useful for future users; a private email is just private and cannot be shared.

+[Top](#top)

@@ -1,10 +1,10 @@
#Tidoop REST API
cosmos-tidoop-api exposes a RESTful API for running MapReduce jobs in a shared Hadoop environment.

-Why emphasize in <i>a shared Hadoop environment</i>? Because shared Hadoops require special management of the data and the analysis processes being run (storage and computation). There are tools like [Oozie](https://oozie.apache.org/) in charge of running MapReduce jobs as well through an API, but they do not take into account the access to the run jobs, their status, results, etc must be controlled. In other words, using Oozie any user may kill a job by knowing its ID; using cosmos-tidoop-api only the owner of the job will be able to.
+Please observe we emphasize <i>a shared Hadoop environment</i>. This is because shared Hadoops require special management of the data and the analysis processes being run (storage and computation). There are tools such as [Oozie](https://oozie.apache.org/) that also run MapReduce jobs through an API, but they do not take into account that access to the running jobs, their status, results, etc. must be controlled. In other words, with Oozie any user may kill a job just by knowing its ID; with cosmos-tidoop-api only the owner of the job is able to do so.

The key point is to relate all the MapReduce operations (run, kill, retrieve status, etc.) to the user space in HDFS. This way, simple but effective authorization policies can be established per user space (in the most basic approach, allowing a user to access only its own user space). This can be easily combined with authentication mechanisms such as [OAuth2](http://oauth.net/2/), as the sketch below illustrates.
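
As an illustration of that idea (and only an illustration; the actual cosmos-tidoop-api implementation may differ), the per-user-space policy can be sketched as follows: the user identity derived from the OAuth2 token determines a user space, every launched job is registered under that space, and any later operation on the job is authorized only for its owner. All names below are hypothetical.

```python
# Hypothetical sketch of per-user-space authorization for MapReduce operations.
# This is not the cosmos-tidoop-api code; it only illustrates the policy described above.

# In-memory registry mapping job IDs to the HDFS user space that owns them.
jobs = {}  # job_id -> user space, e.g. "/user/frb"


def user_space(user_id: str) -> str:
    """HDFS user space associated with an authenticated user (e.g. taken from an OAuth2 token)."""
    return f"/user/{user_id}"


def run_job(user_id: str, job_id: str) -> None:
    """Register a newly launched MapReduce job under the caller's user space."""
    jobs[job_id] = user_space(user_id)


def authorize(user_id: str, job_id: str) -> bool:
    """Allow an operation (status, kill, retrieve results...) only to the job's owner."""
    return jobs.get(job_id) == user_space(user_id)


# Example: "frb" may operate on her own job, while any other user may not.
run_job("frb", "job_0001")
assert authorize("frb", "job_0001")
assert not authorize("eve", "job_0001")
```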

-Finally, it is important to remark cosmos-tidoop is being designed to run in a computing cluster, but in charge of analyzing the data within a storage cluster. Sometimes, of course, both storage and computing cluster may be the same, splitted software is ready for that.
+Finally, it is important to remark that cosmos-tidoop-api is designed to run in a computing cluster while being in charge of analyzing the data held in a storage cluster. Of course, the storage and computing clusters may sometimes be the same; the software is ready for that case as well.

Further information can be found in the documentation at [fiware-cosmos.readthedocs.io](http://fiware-cosmos.readthedocs.io/en/latest/).