https://stackoverflow.com/questions/66685638/datab…
Databricks - Download a dbfs:/FileStore file to my Local Machine
Method 3: Use the third-party tool DBFS Explorer. DBFS Explorer was created as a quick way to upload and download files to the Databricks filesystem (DBFS). It works with both AWS and Azure instances of Databricks. You will need to create a bearer token in the web interface in order to connect.
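The same bearer-token download that DBFS Explorer performs can be scripted against the DBFS REST API. A minimal sketch, assuming the /api/2.0/dbfs/read endpoint (which returns file contents base64-encoded, capped at roughly 1 MB per call); host, token, and path are placeholders:

```python
import base64
import json
import urllib.request


def decode_dbfs_payload(payload: dict) -> bytes:
    """The /api/2.0/dbfs/read response carries the file bytes base64-encoded."""
    return base64.b64decode(payload["data"])


def read_dbfs_chunk(host: str, token: str, path: str,
                    offset: int = 0, length: int = 1024 * 1024) -> bytes:
    """Fetch one chunk of a DBFS file via the REST API with a bearer token."""
    url = (f"https://{host}/api/2.0/dbfs/read"
           f"?path={path}&offset={offset}&length={length}")
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return decode_dbfs_payload(json.loads(resp.read()))
```

For files larger than the per-call cap, loop over read_dbfs_chunk with increasing offsets until the API reports fewer bytes read than requested.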
https://stackoverflow.com/questions/59107489/datab…
databricks: writing spark dataframe directly to excel
Is there any method to write a Spark DataFrame directly to xls/xlsx format? Most of the examples on the web are for pandas DataFrames, but I would like to use a Spark datafr...
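Python Spark has no native xlsx writer; the common workaround is to collect a small DataFrame to the driver and let pandas write it (pandas needs openpyxl for xlsx). A sketch under that assumption, with names illustrative:

```python
def spark_df_to_excel(df, path: str) -> str:
    """Write a (small) Spark DataFrame to xlsx by converting to pandas first.

    This collects the whole DataFrame to the driver, so it is only suitable
    for data that fits in driver memory.
    """
    if not path.endswith(".xlsx"):
        path += ".xlsx"
    df.toPandas().to_excel(path, index=False)  # pandas uses openpyxl for xlsx
    return path
```

For larger data, the JVM-side spark-excel library (com.crealytics:spark-excel) can write xlsx directly from Spark without collecting to the driver.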
https://stackoverflow.com/questions/66143853/how-t…
databricks - How to get the cluster's JDBC/ODBC parameters ...
Databricks documentation shows how to get the cluster's hostname, port, HTTP path, and JDBC URL parameters from the JDBC/ODBC tab in the UI. See image: (source: databricks.com) Is there a way to get ...
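Inside a notebook, the individual pieces can be read from the Spark conf and assembled programmatically. A hedged sketch assuming the legacy cluster HTTP-path layout (sql/protocolv1/o/<org-id>/<cluster-id>); verify the result against the JDBC/ODBC tab for your workspace:

```python
def cluster_jdbc_url(workspace_host: str, org_id: str, cluster_id: str) -> str:
    """Assemble a cluster JDBC URL from its parts.

    In a notebook, workspace_host can come from
    spark.conf.get("spark.databricks.workspaceUrl") and cluster_id from
    spark.conf.get("spark.databricks.clusterUsageTags.clusterId").
    """
    http_path = f"sql/protocolv1/o/{org_id}/{cluster_id}"
    return (f"jdbc:databricks://{workspace_host}:443/default;"
            f"transportMode=http;ssl=1;AuthMech=3;httpPath={http_path}")
```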
https://stackoverflow.com/questions/69925461/print…
Printing secret value in Databricks - Stack Overflow
Building on @camo's answer, since you're looking to use the secret value outside Databricks, you can use the Databricks Python SDK to fetch the bytes representation of the secret value, then decode and print locally (or on any compute resource outside of Databricks).
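That approach can be sketched as follows. The scope and key names are placeholders, and the SDK call itself is shown commented out because it requires configured workspace authentication; the decode step is what makes the value printable:

```python
import base64


def decode_secret_value(b64_value: str) -> str:
    """The SDK's GetSecretResponse.value is base64-encoded; decode to plaintext."""
    return base64.b64decode(b64_value).decode("utf-8")


# In an authenticated environment (requires the databricks-sdk package):
# from databricks.sdk import WorkspaceClient
# w = WorkspaceClient()
# resp = w.secrets.get_secret(scope="my-scope", key="my-key")
# print(decode_secret_value(resp.value))
```

Note that inside a Databricks notebook, printing a secret fetched via dbutils is redacted; decoding locally through the SDK sidesteps that redaction, so handle the output accordingly.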
https://stackoverflow.com/questions/53523560/datab…
Databricks: How do I get path of current notebook?
Databricks is smart and all, but how do you identify the path of your current notebook? The guide on the website does not help. It suggests: %scala dbutils.notebook.getContext.notebookPath res1: ...
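The Python equivalent of that Scala call goes through the dbutils entry point. A sketch that only runs inside a Databricks notebook (where a dbutils object exists); it is written as a function taking dbutils so the call chain is explicit:

```python
def current_notebook_path(dbutils) -> str:
    """Python equivalent of the Scala dbutils.notebook.getContext.notebookPath."""
    return (dbutils.notebook.entry_point.getDbutils()
            .notebook().getContext().notebookPath().get())
```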
https://stackoverflow.com/questions/64883627/how-t…
How to roll back delta table to previous version - Stack Overflow
Vacuum the table: VACUUM table_name RETAIN 0 HOURS. Retaining 0 hours will remove all history snapshots. There is a Spark config you need to set before the vacuum, since by default delta logs are maintained for 7 days: spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", False). The line above is the setting that allows it.
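The two steps from that answer can be sketched as one helper. The retention check is a safety net (a zero-hour VACUUM breaks time travel and can affect concurrent readers), so disabling it should be a deliberate choice:

```python
def purge_delta_history(spark, table_name: str) -> None:
    """VACUUM with RETAIN 0 HOURS, removing all history snapshots.

    Delta retains 7 days of logs by default; the retentionDurationCheck must
    be disabled first or the zero-hour VACUUM is rejected.
    """
    spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")
    spark.sql(f"VACUUM {table_name} RETAIN 0 HOURS")
```

If the goal is rolling back rather than purging, RESTORE TABLE table_name TO VERSION AS OF n keeps the history intact and is usually the safer option.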
https://stackoverflow.com/questions/77858473/using…
databricks - Using Service Principal in Azure Devops Pipeline to run ...
Go to the target Databricks job -> Job details -> Edit permissions -> add Can Manage Run for the service principal. In your Azure pipeline YAML, you can get the access token for the service principal (the resource ID is 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d) and use it for the Databricks CLI.
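The token step can be sketched as a client-credentials request against Microsoft Entra ID, scoped to the Databricks resource ID quoted above. Tenant and client values are placeholders; the helper only builds the request, which would then be POSTed form-encoded:

```python
def databricks_token_request(tenant_id: str, client_id: str,
                             client_secret: str) -> tuple:
    """Build the client-credentials token request for the Databricks resource.

    2ff814a6-3304-4ab8-85cb-cd0e6f879c1d is the fixed Azure Databricks
    resource ID; the v2.0 endpoint expresses it as a <resource>/.default scope.
    """
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default",
    }
    return url, body
```

The access_token field of the JSON response is what the Databricks CLI consumes, e.g. via the DATABRICKS_TOKEN environment variable in the pipeline.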
https://stackoverflow.com/questions/78075840/insta…
Installing multiple libraries 'permanently' on Databricks' cluster ...
https://stackoverflow.com/questions/79035989/is-th…
Is there a way to use parameters in Databricks in SQL with parameter ...
EDIT: I got a message from a Databricks employee that currently (DBR 15.4 LTS) the parameter marker syntax is not supported in this scenario. It might work in future versions. Original question:...
https://stackoverflow.com/questions/61022848/do-yo…
Do you know how to install the 'ODBC Driver 17 for SQL Server' on a ...
By default, Azure Databricks does not have the ODBC driver installed. Run the following commands in a single cell to install the MS SQL ODBC driver on an Azure Databricks cluster.
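The commands themselves are cut off in the snippet; the sketch below follows Microsoft's documented Linux install steps for msodbcsql17 and is an assumption, not the answer's exact text. The Ubuntu version in the repo URL (20.04 here) must match the cluster's base image. The steps are held in a list and run through an injectable runner:

```python
import subprocess

# Assumed steps per Microsoft's msodbcsql17 install docs for Ubuntu;
# adjust 20.04 in the repo URL to the cluster's Ubuntu release.
INSTALL_STEPS = [
    "curl -sSL https://packages.microsoft.com/keys/microsoft.asc | apt-key add -",
    "curl -sSL https://packages.microsoft.com/config/ubuntu/20.04/prod.list"
    " > /etc/apt/sources.list.d/mssql-release.list",
    "apt-get update",
    "ACCEPT_EULA=Y apt-get install -y msodbcsql17",
]


def run_install(steps=INSTALL_STEPS, runner=subprocess.run):
    """Run each install step through the shell; runner is injectable for testing."""
    for step in steps:
        runner(step, shell=True, check=True)
```

On Databricks, putting these commands in a cluster-scoped init script makes the driver survive cluster restarts instead of having to re-run the cell.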