Wednesday, 9 March 2016

IPA_KERBEROS_SETUP_HDP2.2

Kerberos configuration at IPA Client:
-----------------------------------------------
- check whether krb5.conf was updated by ipa-client (see the quick check below)
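
  A quick check (a sketch; assumes the default config location and the realm used throughout this walkthrough):
  ```
  # default_realm should point at the IPA realm (AMAZONAWS.COM here)
  grep default_realm /etc/krb5.conf
  ```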

- Login to Ambari (if server is not started, execute /root/start_ambari.sh) by opening http://ec2-54-172-53-173.compute-1.amazonaws.com:8080 and then
  - Admin -> Security-> click “Enable Security”
  - On "get started” page, click Next
  - On “Configure Services”, click Next to accept defaults
  - On “Create Principals and Keytabs”, click “Download CSV”. Save it to the sandbox by running “vi /root/host-principal-keytab-list.csv” and pasting the content
  - Without pressing “Apply", go back to terminal

- Edit host-principal-keytab-list.csv and move the entry containing 'rm.service.keytab' to the top of the file. Also add the hue and knox principals at the end, making sure there are no empty lines at the end
  ```
  # Append to host-principal-keytab-list.csv
  # fields: host,component,principal,keytab file,keytab dir,owner,group,mode
  ec2-54-172-53-173.compute-1.amazonaws.com,Hue,hue/ec2-54-172-53-173.compute-1.amazonaws.com@AMAZONAWS.COM,hue.service.keytab,/etc/security/keytabs,hue,hadoop,400
  ```
 
On IPA Server
-------------------------------------------
```
# create a service principal for every service entry in the CSV
for i in `awk -F"," '/service/ {print $3}' host-principal-keytab-list.csv` ; do ipa service-add $i ; done
# add the headless users
ipa user-add hdfs --first=HDFS --last=HADOOP --homedir=/var/lib/hadoop-hdfs --shell=/bin/bash
ipa user-add ambari-qa --first=AMBARI-QA --last=HADOOP --homedir=/home/ambari-qa --shell=/bin/bash
ipa user-add storm --first=STORM --last=HADOOP --homedir=/home/storm --shell=/bin/bash
```
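
To confirm the principals were created, a quick check on the IPA server (a sketch; output labels vary slightly by IPA version):
```
ipa service-find | grep -i principal   # service principals added from the CSV
ipa user-show hdfs                     # one of the headless users added above
```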


On IPA_NN
-------------------------------------------
```
# generate one ipa-getkeytab/chown/chmod line per keytab belonging to this host
awk -F"," '/ec2-54-172-53-173.compute-1.amazonaws.com/ {print "ipa-getkeytab -s ec2-54-86-17-4.compute-1.amazonaws.com -p "$3" -k /etc/security/keytabs/"$4";chown "$6":"$7" /etc/security/keytabs/"$4";chmod "$8" /etc/security/keytabs/"$4}' host-principal-keytab-list.csv | sort -u > gen_keytabs_NN.sh
chmod +x gen_keytabs_NN.sh

mkdir -p /etc/security/keytabs/
chown root:hadoop /etc/security/keytabs/
./gen_keytabs_NN.sh
chmod 440 /etc/security/keytabs/hue.service.keytab
```

Copy the below keytabs to all the datanodes (i.e. IPA_DN); see the sketch after this list:
  hdfs.headless.keytab
  smokeuser.headless.keytab
  storm.service.keytab
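
A minimal sketch of the copy, assuming root ssh access from IPA_NN to the datanode used in this walkthrough:
```
for kt in hdfs.headless.keytab smokeuser.headless.keytab storm.service.keytab; do
  scp /etc/security/keytabs/$kt root@ec2-54-173-54-193.compute-1.amazonaws.com:/etc/security/keytabs/
done
```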

On IPA_DN
---------------------------------------------
```
awk -F"," '/ec2-54-173-54-193.compute-1.amazonaws.com@AMAZONAWS.COM/ {print "ipa-getkeytab -s ec2-54-86-17-4.compute-1.amazonaws.com -p "$3" -k /etc/security/keytabs/"$4";chown "$6":"$7" /etc/security/keytabs/"$4";chmod "$8" /etc/security/keytabs/"$4}' host-principal-keytab-list.csv | sort -u >> gen_keytabs_DN.sh
chmod +x gen_keytabs_DN.sh

mkdir -p /etc/security/keytabs/
chown root:hadoop /etc/security/keytabs/
./gen_keytabs_DN.sh
```

- List the keytabs
```
ls -la /etc/security/keytabs/*.keytab | wc -l
```

- check that keytab info can be accessed by klist
```
klist -ekt /etc/security/keytabs/nn.service.keytab
```

- verify you can kinit as the hadoop components. kinit with a node's own principals should succeed; note in the output below that each dn keytab only contains keys for its own host, so cross-host attempts fail
```
@Node1
[root@IPA_NN ~]$ kinit -V -kt /etc/security/keytabs/dn.service.keytab dn/ec2-54-172-53-173.compute-1.amazonaws.com@AMAZONAWS.COM
Using default cache: /tmp/krb5cc_0
Using principal: dn/ec2-54-172-53-173.compute-1.amazonaws.com@AMAZONAWS.COM
Using keytab: /etc/security/keytabs/dn.service.keytab
Authenticated to Kerberos v5
[root@IPA_NN ~]$  kinit -V -kt /etc/security/keytabs/dn.service.keytab dn/ec2-54-173-54-193.compute-1.amazonaws.com@AMAZONAWS.COM
Using default cache: /tmp/krb5cc_0
Using principal: dn/ec2-54-173-54-193.compute-1.amazonaws.com@AMAZONAWS.COM
Using keytab: /etc/security/keytabs/dn.service.keytab
kinit: Keytab contains no suitable keys for dn/ec2-54-173-54-193.compute-1.amazonaws.com@AMAZONAWS.COM while getting initial credentials

@Node2
[root@IPA_DN ~]$ kinit -V -kt /etc/security/keytabs/dn.service.keytab dn/ec2-54-172-53-173.compute-1.amazonaws.com@AMAZONAWS.COM
Using default cache: /tmp/krb5cc_0
Using principal: dn/ec2-54-172-53-173.compute-1.amazonaws.com@AMAZONAWS.COM
Using keytab: /etc/security/keytabs/dn.service.keytab
kinit: Keytab contains no suitable keys for dn/ec2-54-172-53-173.compute-1.amazonaws.com@AMAZONAWS.COM while getting initial credentials
[root@IPA_DN ~]$  kinit -V -kt /etc/security/keytabs/dn.service.keytab dn/ec2-54-173-54-193.compute-1.amazonaws.com@AMAZONAWS.COM
Using default cache: /tmp/krb5cc_0
Using principal: dn/ec2-54-173-54-193.compute-1.amazonaws.com@AMAZONAWS.COM
Using keytab: /etc/security/keytabs/dn.service.keytab
Authenticated to Kerberos v5

[root@IPA_NN ~]$ kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@AMAZONAWS.COM
Using default cache: /tmp/krb5cc_0
Using principal: hdfs@AMAZONAWS.COM
Using keytab: /etc/security/keytabs/hdfs.headless.keytab
Authenticated to Kerberos v5


[root@IPA_DN ~]$  kinit -V -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@AMAZONAWS.COM
Using default cache: /tmp/krb5cc_0
Using principal: hdfs@AMAZONAWS.COM
Using keytab: /etc/security/keytabs/hdfs.headless.keytab
Authenticated to Kerberos v5
```
- Click Apply in Ambari to enable security and restart all the components

If the wizard errors out towards the end due to a component not starting up, it's not a problem: you should be able to start it manually via Ambari.

Install Hue
-----------------------------------------------

1. Modify/add the below-mentioned properties (a WebHDFS sanity check follows at the end of this step).

Ambari-->HDFS-->Config-->hdfs-site

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
Modify the core-site.xml file.

Ambari-->HDFS-->Config-->core-site

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.hosts</name>
  <value>*</value>
</property>

Ambari-->Hive-->Config-->webhcat-site

<property>
  <name>webhcat.proxyuser.hue.hosts</name>  
  <value>*</value>
</property>
<property>  
  <name>webhcat.proxyuser.hue.groups</name>
  <value>*</value>
</property>

Ambari-->Oozie-->Config-->oozie-site

<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>  
  <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
  <value>*</value>
</property>
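
With dfs.webhdfs.enabled in place and the cluster Kerberized, WebHDFS can be sanity-checked over SPNEGO (a sketch; get a ticket first):
```
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@AMAZONAWS.COM
curl --negotiate -u : "http://ec2-54-172-53-173.compute-1.amazonaws.com:50070/webhdfs/v1/?op=LISTSTATUS"
```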

2. Restart all the services (HDFS, MapReduce, YARN, Oozie and Hive).

3. Install Hue:
yum install hue

4. Changes in hue.ini
vi /etc/hue/conf/hue.ini

 # Webserver listens on this address and port
  http_host=ec2-54-172-53-173.compute-1.amazonaws.com
  http_port=8888

 [[hdfs_clusters]]

    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://ec2-54-172-53-173.compute-1.amazonaws.com:8020

      # Use WebHdfs/HttpFs as the communication mechanism. To fallback to
      # using the Thrift plugin (used in Hue 1.x), this must be uncommented
      # and explicitly set to the empty value.
      webhdfs_url=http://ec2-54-172-53-173.compute-1.amazonaws.com:50070/webhdfs/v1/

       security_enabled=true

  [[yarn_clusters]]

    [[[default]]]
      # Whether to submit jobs to this cluster
      submit_to=true

       security_enabled=true

      # Resource Manager logical name (required for HA)
      ## logical_name=

      # URL of the ResourceManager webapp address (yarn.resourcemanager.webapp.address)
      resourcemanager_api_url=http://ec2-54-172-53-173.compute-1.amazonaws.com:8088

      # URL of the Yarn RPC address (yarn.resourcemanager.address)
      resourcemanager_rpc_url=http://ec2-54-172-53-173.compute-1.amazonaws.com:8050

      # URL of the ProxyServer API
      proxy_api_url=http://ec2-54-172-53-173.compute-1.amazonaws.com:8088

      # URL of the HistoryServer API
      history_server_api_url=http://ec2-54-172-53-173.compute-1.amazonaws.com:19888

      # URL of the NodeManager API
      node_manager_api_url=http://ec2-54-172-53-173.compute-1.amazonaws.com:8042

  [liboozie]
  # The URL where the Oozie service runs on. This is required in order for
  # users to submit jobs.
  oozie_url=http://ec2-54-173-54-193.compute-1.amazonaws.com:11000/oozie

  security_enabled=true

  [beeswax]

  # Host where Hive server Thrift daemon is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=ec2-54-172-53-173.compute-1.amazonaws.com

  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000
 
  [hcatalog]
  templeton_url=http://ec2-54-172-53-173.compute-1.amazonaws.com:50111/templeton/v1/
  security_enabled=true
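
The WebHCat/Templeton endpoint configured above can be smoke-tested the same way (a sketch; needs a valid Kerberos ticket once security is on):
```
curl --negotiate -u : "http://ec2-54-172-53-173.compute-1.amazonaws.com:50111/templeton/v1/status"
```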


5. Hue config changes needed to make Hue work on an LDAP-enabled, Kerberized cluster

Goals:

Kerberos-enable Hue and integrate it with FreeIPA's directory.
Now that Kerberos has been enabled on the sandbox VM and LDAP has also been set up, we can configure Hue for this setup.

Edit the Kerberos principal-to-hadoop-user mapping to add Hue. Under Ambari > HDFS > Configs > hadoop.security.auth_to_local, add the hue entry above DEFAULT. If the other entries are missing, add them too:

        RULE:[2:$1@$0]([rn]m@.*)s/.*/yarn/
        RULE:[2:$1@$0](jhs@.*)s/.*/mapred/
        RULE:[2:$1@$0]([nd]n@.*)s/.*/hdfs/
        RULE:[2:$1@$0](hm@.*)s/.*/hbase/
        RULE:[2:$1@$0](rs@.*)s/.*/hbase/
        RULE:[2:$1@$0](hue/ec2-54-172-53-173.compute-1.amazonaws.com@.*AMAZONAWS.COM)s/.*/hue/      
        DEFAULT      
Allow hive to impersonate users from whichever LDAP groups you choose:
hadoop.proxyuser.hive.groups = users, sales, legal, admins

(note: use * to allow all user groups)
Restart HDFS via Ambari. A quick way to test the mapping follows below.
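
To check how a principal maps to a local user under these rules (a sketch; HadoopKerberosName is a utility class that ships with Hadoop):
```
hadoop org.apache.hadoop.security.HadoopKerberosName hue/ec2-54-172-53-173.compute-1.amazonaws.com@AMAZONAWS.COM
# expected output per the hue rule above: hue
```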


Edit /etc/hue/conf/hue.ini, uncommenting/changing properties to make it Kerberos-aware.
NOTE: Update the below properties in their respective sections/blocks.
Change all instances of "security_enabled" to true

 [[kerberos]]

    # Path to Hue's Kerberos keytab file
     hue_keytab=/etc/security/keytabs/hue.service.keytab

    # Kerberos principal name for Hue
     hue_principal=hue/ec2-54-172-53-173.compute-1.amazonaws.com@AMAZONAWS.COM

    # Path to kinit
     kinit_path=/usr/bin/kinit

    ## Frequency in seconds with which Hue will renew its keytab. Default 1h.
     reinit_frequency=3600

    ## Path to keep Kerberos credentials cached.
     ccache_path=/tmp/hue_krb5_ccache
 



Make changes to /etc/hue/conf/hue.ini to set backend to LDAP:
NOTE: Update the below properties in their respective sections/blocks.
backend=desktop.auth.backend.LdapBackend
pam_service=login
base_dn="DC=amazonaws,DC=com"
ldap_url=ldap://ec2-54-86-17-4.compute-1.amazonaws.com
ldap_username_pattern="uid=<username>,cn=users,cn=accounts,dc=amazonaws,dc=com"
create_users_on_login=true
user_filter="objectclass=person"
user_name_attr=uid
group_filter="objectclass=*"
group_name_attr=cn
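
Before restarting, the bind parameters can be verified with an anonymous search against the IPA server (a sketch; assumes the openldap-clients package is installed):
```
ldapsearch -x -H ldap://ec2-54-86-17-4.compute-1.amazonaws.com \
  -b "cn=users,cn=accounts,dc=amazonaws,dc=com" "(uid=hue)" uid cn
```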
Restart Hue

- Access HDFS as Hue user
```
su - hue
#Create a kerberos ticket for the user
kinit -kt /etc/security/keytabs/hue.service.keytab hue/ec2-54-172-53-173.compute-1.amazonaws.com@AMAZONAWS.COM
#verify that hue user can now get ticket and can access HDFS
klist
hadoop fs -ls /user
# you should get the directory listing back
exit
```

NOTE: for the error
```
Fail: Execution of 'hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` /app-logs /mapred /mapred/system /mr-history/tmp /mr-history/done && hadoop --config /etc/hadoop/conf fs -chmod -R 777 /app-logs && hadoop --config /etc/hadoop/conf fs -chmod  777 /mr-history/tmp && hadoop --config /etc/hadoop/conf fs -chmod  1777 /mr-history/done && hadoop --config /etc/hadoop/conf fs -chown  mapred /mapred && hadoop --config /etc/hadoop/conf fs -chown  hdfs /mapred/system && hadoop --config /etc/hadoop/conf fs -chown  yarn:hadoop /app-logs && hadoop --config /etc/hadoop/conf fs -chown  mapred:hadoop /mr-history/tmp /mr-history/done' returned 1. 15/03/12 08:25:56 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
mkdir: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "ec2-54-172-53-173.compute-1.amazonaws.com/172.31.8.33"; destination host is: "ec2-54-172-53-173.compute-1.amazonaws.com":8020;
```

Solution:
Download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for JDK 7:
http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html

Copy local_policy.jar and US_export_policy.jar to $JAVA_HOME/jre/lib/security/.
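
To confirm the unlimited-strength policy took effect (a sketch; jrunscript ships with the JDK):
```
# prints 2147483647 once the unlimited policy files are in place
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'
```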
