@oxo42
Created June 30, 2016 10:40
Migrate Splunk users from LDAP to SSO
#!/bin/bash
splunk_home=/opt/splunk/etc
my_users=$splunk_home/apps/my_domain/lookups/my_users.csv
users=$splunk_home/users
authfile=$splunk_home/new-auths.txt
# Clear the auth file
: > "$authfile"
csvcut -c sAMAccountName,userPrincipalName "$my_users" | while IFS=, read -r username mail
do
    # Strip a trailing carriage return (CSVs with CRLF line endings), then lowercase the username
    mail=${mail%$'\r'}
    sAMAccountName=${username,,}
    # Only migrate users that have a home directory and a routable UPN
    if [[ -d "$users/$sAMAccountName" && $mail != *".local" ]] ; then
        echo "Moving $users/$sAMAccountName to $users/$mail"
        mv "$users/$sAMAccountName" "$users/$mail"
        # Rewrite ownership in any .meta files that reference the old name.
        # Anchor the match at end of line so a re-run does not turn an
        # already-migrated user@domain.com into user@domain.com@domain.com.
        for meta in $(grep -rl "$sAMAccountName\$" "$splunk_home" | grep -E '\.meta$') ; do
            echo "In $meta, changing owner from $sAMAccountName to $mail"
            sed -i "s/$sAMAccountName\$/$mail/" "$meta"
            echo "$mail = user" >> "$authfile"
        done
    fi
done
# sort -u deduplicates even when duplicates are not adjacent
sort -u "$authfile" > "$authfile.uniq"
@Hodgegoblin

Thank you, you made my day; I didn't have to write this up myself.

@Hodgegoblin

Recommend changing the grep on line 21 to "$sAMAccountName$" and the sed on line 23 to "s/$sAMAccountName$/$mail/g", adding the regex $ so the match anchors at end of line.

We had already migrated one of our servers before we discovered the different usernames, so objects owned by $mail already existed. Re-running the script changed ownership to "user@domain.com@domain.com", hah!
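The double substitution described above is easy to reproduce. A minimal sketch (jsmith, domain.com, and the meta file contents are placeholders, not taken from the script's environment):

```shell
#!/bin/bash
# A hypothetical .meta line for an object already owned by the migrated name.
meta=$(mktemp)
printf 'owner = jsmith@domain.com\n' > "$meta"

# Unanchored substitution matches the bare username inside the e-mail
# address and appends the domain a second time:
sed 's/jsmith/jsmith@domain.com/g' "$meta"
# -> owner = jsmith@domain.com@domain.com

# Anchoring at end of line leaves already-migrated owners untouched:
sed 's/jsmith$/jsmith@domain.com/' "$meta"
# -> owner = jsmith@domain.com

rm -f "$meta"
```

This makes the substitution idempotent, so re-running the migration on a partially migrated server is safe.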

@sanatani806

Glad to find this migration script as well.

@Hodgegoblin How did the migration go for you?

Did you run into any issues with users no longer being able to find their private knowledge objects or app-level shared artifacts after logging in using SAML/single sign-on?

Also, did you implement this on a clustered environment or standalone?

Would be great if you can share any learnings; we are planning to implement the same soon.

@Hodgegoblin

Hodgegoblin commented Sep 8, 2022

@sanatani806

We have multiple standalone search heads with clustered indexers. Our indexers use local Splunk auth because SSO doesn't work on the CLI, so all migrations were on standalone servers. The migration on our cluster manager, deployment server and search heads went flawlessly with the help of this script, although we had to edit the regex on lines 21 and 23 because one of our servers was migrated to SSO before running the script: some objects were already owned by username@domain.com, and running it changed them to username@domain.com@domain.com.

We had LDAP as our authentication method prior to migrating to SSO and used this SPL to export our existing users to the CSV that feeds the script. Note: our UPN and email are different (-shrug- I tried but no one listened), so we append our domain to the sAMAccountName to create our UPN. Your environment may be different.

| rest /services/authentication/users splunk_server=local 
| search type=LDAP 
| rename title as sAMAccountName
| eval userPrincipalName=sAMAccountName+"@domain.com"
| table sAMAccountName, userPrincipalName

We had one search head complain about orphaned knowledge objects immediately post-migration for one user. We had that user log in to the search head and it cleared everything up. Not sure if it was just a timing issue with something internal to Splunk at startup that would have self-corrected, but only one user on one server was observed with orphaned knowledge objects after the migrations.

@sanatani806

@Hodgegoblin Thanks a lot for sharing the details of your implementation. Glad to know it went smoothly.

A few more questions:

  1. Did you add the list of users identified by the script (line 29) to authentication.conf? If yes, did you just add the "user" role, or did you assign all the expected roles?
     Example: micheal has access to all 3 roles (role1, role2, role3):

a) micheal@abc.com = user
or
b) micheal@abc.com = role1,role2,role3

  2. After the script is run, did you restart the Splunk instances?

Reason I ask: right now we have integrated one dev instance (which was not connected to LDAP) with SAML. There is one native user who has a couple of private objects and app-level shared objects. I ran the script for this user and now all the artifacts for this user use his email address (private objects and app-level shared objects, i.e. in local.meta). The user is yet to log in; however, I still see in Splunk that all the artifacts show the old native user as owner. I will have the user log in tomorrow and see how it behaves.

@sanatani806

Did a debug/refresh; this took care of point 2 that I asked about.

@Hodgegoblin

@sanatani806

  1. We did not add the users to authentication.conf. This could have caused the orphaned-knowledge-object issue for us, but after a user logs in they're mapped to the roles in authentication.conf. We mapped groups to our Splunk roles, and then users get mapped to the correct roles based on group membership.

  2. We did restart the Splunk instances after running the script.
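For reference, group-to-role mapping like this lives in the roleMap stanza of authentication.conf. A minimal sketch, assuming the SAML auth scheme; the group names are placeholders for whatever groups your IdP asserts, not values from this thread:

```
[roleMap_SAML]
admin = Splunk-Admins
user = Splunk-Users
```

With this in place, a user asserted into Splunk-Users picks up the user role at login, so per-user entries for role assignment are unnecessary.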

@vishalgugale

vishalgugale commented Nov 29, 2023

@Hodgegoblin @oxo42 @sanatani806 Thanks for the script and the discussion above. I am doing an LDAP to SAML migration and was testing this script on our dev server, but after I executed the script and tried to restart the Splunk service I got the message below in splunkd.log, and then the Splunk service does not start. I executed the script while Splunk was running. Did you come across a similar issue?

INFO ConfigWatcher [83579 SplunkConfigChangeWatcherThread] - File deleted while splunkd was not running path=/opt/splunk/etc/users/user1@test.com\r/corp_digital_TA_css/local/ui-prefs.conf DELETED

Also, the directories moved from user1 to user1@test.com were created as below, with a "?" at the end. I checked the CSV lookup and I don't see any extra characters in the email field.

drwx------. 14 splunk splunk 278 Nov 29 13:46 user1@test.com?
drwx------. 5 splunk splunk 78 Nov 29 13:46 user2@test.com?
drwx------. 9 splunk splunk 166 Nov 29 13:46 user3@test.com?
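The trailing "?" in the ls output and the literal \r in the logged path both point to CRLF line endings in the CSV: $mail ends with a carriage return, so the directory gets created as user1@test.com followed by a CR. A minimal sketch of the fix (the sample CSV line is made up; the key step is stripping the CR before using the value):

```shell
#!/bin/bash
# Simulate one CRLF-terminated CSV record, as a Windows-exported lookup would produce.
line=$'jsmith,jsmith@test.com\r'
IFS=, read -r username mail <<< "$line"
printf 'before: %q\n' "$mail"   # %q makes the stray carriage return visible
mail=${mail%$'\r'}              # drop a trailing carriage return, if any
printf 'after:  %q\n' "$mail"
```

Alternatively, normalize the whole lookup once before running the migration, e.g. tr -d '\r' < my_users.csv > my_users.unix.csv (or dos2unix), so every field read from it is already clean.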

@vishalgugale

@Hodgegoblin @sanatani806 @oxo42 are any of you still actively checking this post?

I am getting the error below and am not sure how to get rid of it. The only thing I am doing for now is using cp to copy the user directories instead of mv.

INFO ConfigWatcher [83579 SplunkConfigChangeWatcherThread] - File deleted while splunkd was not running path=/opt/splunk/etc/users/user1@test.com\r/corp_digital_TA_css/local/ui-prefs.conf DELETED
