Scalable way to calculate betweenness centrality for a graph in Spark
I have a use case that requires calculating the betweenness centrality of the nodes in a large graph. I have tried GraphX with the spark-betweenness library, but the job runs for a very long time without finishing; a simplified version of it is sketched below. Has anyone successfully calculated betweenness centrality for a large network with around 10 million vertices and 100 million edges?
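This is roughly what my job looks like (simplified). The package/class name and the `KBetweenness.run(graph, k)` call are my recollection of the spark-betweenness README, and the input path is just a placeholder, so treat these as assumptions rather than exact code.

```scala
import org.apache.spark.graphx.{Edge, Graph}
import org.apache.spark.sql.SparkSession
import com.centrality.kBC.KBetweenness // from the spark-betweenness library (name as I recall it)

object BetweennessJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("betweenness").getOrCreate()
    val sc = spark.sparkContext

    // Edge list stored as "srcId dstId" per line; vertex ids are Longs.
    // The HDFS path is a hypothetical placeholder.
    val edges = sc.textFile("hdfs:///graph/edges.txt").map { line =>
      val Array(src, dst) = line.split("\\s+")
      Edge(src.toLong, dst.toLong, 1.0)
    }
    val graph = Graph.fromEdges(edges, defaultValue = 0.0)

    // k-betweenness: only shortest paths of length <= k are considered (here k = 4),
    // which is the approximation the library exposes.
    val kBcGraph = KBetweenness.run(graph, 4)
    kBcGraph.vertices.take(10).foreach(println)
  }
}
```

Even with a small k this has not completed in a reasonable time on my cluster, which is why I am asking whether a different approach (sampling, approximation, or another framework) is what people actually use at this scale.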
Topic networkx graphs apache-spark social-network-analysis machine-learning
Category Data Science