scala - How to use orderBy() with descending order in Spark window functions?


I need a window function that partitions by some keys (= column names), orders by another column name, and returns the rows with the top x ranks.

This works fine for ascending order:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

def getTopX(df: DataFrame, top_x: String, top_key: String, top_value: String): DataFrame = {
  // top_key is a comma-separated list of partition columns
  val top_keys: List[String] = top_key.split(", ").map(_.trim).toList
  // Partition by all keys (first key plus the rest), order by the value column ascending
  val w = Window.partitionBy(top_keys.head, top_keys.tail: _*)
    .orderBy(top_value)
  val rankCondition = "rn < " + top_x
  df.withColumn("rn", row_number().over(w))
    .where(rankCondition)
    .drop("rn")
}
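For illustration, a hypothetical call could look like the following; the DataFrame and its column names are invented for this example, and an active SparkSession named spark is assumed:

import spark.implicits._

val df = Seq(
  ("a", "x", 1), ("a", "y", 2), ("a", "z", 3),
  ("b", "x", 1), ("b", "y", 2)
).toDF("group", "item", "score")

// Keeps rows with rn < 3, i.e. the two lowest scores per group
getTopX(df, "3", "group", "score").show()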

But when I try to change it to orderBy(desc(top_value)) or orderBy(top_value.desc) in the orderBy call, I get a syntax error. What's the correct syntax here?

There are two versions of orderBy: one that works with strings and one that works with Column objects (API). Your code is using the first version, which does not allow for changing the sort order. You need to switch to the Column version and then call the desc method, e.g., myCol.desc.
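To make the difference concrete, here is a minimal sketch of the two overloads (the column names are placeholders, and the standard Spark imports are assumed):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.col

// String overload: takes column names, always sorts ascending
val wAsc = Window.partitionBy("k").orderBy("v")

// Column overload: takes Column expressions, so the sort order can be flipped
val wDesc = Window.partitionBy("k").orderBy(col("v").desc)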

Now we get into API design territory. The advantage of passing Column parameters is that you have a lot more flexibility, e.g., you can use expressions, etc. If you want to maintain an API that takes in a string as opposed to a Column, you need to convert the string to a Column. There are a number of ways to do this, and the easiest is to use org.apache.spark.sql.functions.col(myColName).

Putting it all together, we get

.orderBy(org.apache.spark.sql.functions.col(top_value).desc)
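Applied to the function above, a sketch of a descending variant (same hypothetical signature as before) could look like this:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}

def getTopXDesc(df: DataFrame, top_x: String, top_key: String, top_value: String): DataFrame = {
  val top_keys: List[String] = top_key.split(", ").map(_.trim).toList
  // col(top_value).desc converts the string to a Column and reverses the sort
  val w = Window.partitionBy(top_keys.head, top_keys.tail: _*)
    .orderBy(col(top_value).desc)
  df.withColumn("rn", row_number().over(w))
    .where("rn < " + top_x)
    .drop("rn")
}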
