CAS Actions

The Cloud Analytic Services (CAS) REST APIs are for data scientists, programmers, and administrators who need to interact with CAS directly and know about CAS actions. The API provides operations for executing CAS actions, managing CAS sessions, monitoring the system, and inspecting the CAS grid.

SAS Viya uses CAS to perform tasks. The smallest unit of work for the CAS server is a CAS action. CAS actions can load data, transform data, compute statistics, perform analytics, and create output. Each action is configured by specifying a set of input parameters. When a CAS action runs on the CAS server, it processes the action's parameters and the data, then produces an action result.
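To make the parameter-driven model concrete, here is a minimal sketch of what an action invocation could look like over the REST API. The host name, session id, caslib, table, and endpoint path below are illustrative assumptions, not values from this article; consult the CAS REST API reference for the exact URL shape.

```python
import json

# Hypothetical session id and action name ("simple" action set,
# "summary" action) used purely for illustration.
session_id = "abc123"
action = "simple.summary"

# Each action is configured by a set of input parameters, sent as the
# JSON body of the request.
params = {
    "table": {"caslib": "casuser", "name": "cars"},
    "inputs": ["MSRP", "Horsepower"],
}

# Assumed endpoint shape for executing an action in an existing session.
url = (
    "https://viya.example.com/cas-shared-default-http"
    f"/cas/sessions/{session_id}/actions/{action}"
)
body = json.dumps(params)
```

The action result comes back as JSON in the response body, which client libraries such as the SWAT package unpack into native objects for you.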


Python Integration to SAS® Viya® - Executing SQL on Snowflake

Welcome to the continuation of my series Getting Started with Python Integration to SAS Viya. Given the exciting developments around SAS & Snowflake, I'm eager to demonstrate how to effortlessly connect Snowflake to the massively parallel processing CAS server in SAS Viya with the Python SWAT package. If you're interested [...]
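The post itself walks through the full connection; as a hedged sketch only, a Snowflake data source for CAS might be described with a parameter set like the following. Every name here (the caslib name, the option keys, the account host) is an illustrative assumption, not the exact SWAT or CAS API; check the SAS Viya Snowflake data connector documentation for the real caslib options.

```python
# Illustrative only: a dictionary describing a hypothetical Snowflake
# caslib. Option names are assumptions for illustration.
caslib_def = {
    "name": "snowlib",                                 # hypothetical caslib name
    "dataSource": {
        "srcType": "snowflake",                        # assumed source-type keyword
        "server": "myaccount.snowflakecomputing.com",  # placeholder account host
        "database": "SALES_DB",
        "schema": "PUBLIC",
        "username": "cas_user",
    },
}
```

With the SWAT package, a definition along these lines would typically be passed to a caslib-creation action on an open CAS connection, after which tables in the Snowflake schema can be loaded into the massively parallel CAS server.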

Getting Started with Python Integration to SAS Viya for Predictive Modeling - Comparing Logistic Regression and Decision Tree

Comparing Logistic Regression and Decision Tree - Which of our models is better at predicting our outcome? Learn how to compare models using misclassification rate, area under the ROC curve, and lift charts with validation data. In part 6 and part 7 of this series we fit a logistic regression [...]
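The post computes these comparisons with CAS actions; purely to illustrate what the three measures mean, here is a small self-contained Python sketch of misclassification rate, AUC, and lift. This is a conceptual illustration, not the SWAT API the series uses.

```python
def misclassification_rate(y_true, y_pred):
    """Fraction of predictions that disagree with the true labels."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def auc(y_true, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case is scored above a randomly chosen negative case
    (ties count half)."""
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

def lift_at(y_true, scores, frac=0.1):
    """Lift in the top `frac` of cases ranked by score: the event rate
    among the top-scored cases divided by the overall event rate."""
    k = max(1, int(len(scores) * frac))
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    top = [y_true[i] for i in order[:k]]
    base_rate = sum(y_true) / len(y_true)
    return (sum(top) / k) / base_rate
```

For example, `auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75: of the four positive/negative score pairs, three are ranked correctly. Computing these on validation data rather than training data is what makes the comparison honest.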

Getting Started with Python Integration to SAS Viya for Predictive Modeling - Fitting a Decision Tree

Learn how to fit a decision tree and use your decision tree model to score new data. In Part 6 of this series we took our Home Equity data saved in Part 4 and fit a logistic regression to it. In this post we will use the same data and [...]
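The post fits its decision tree with CAS actions on the Home Equity data; to show the core idea of a tree split in isolation, here is a minimal pure-Python one-level tree (a decision stump) that picks the split minimizing weighted Gini impurity. This is a conceptual sketch, not the series' SWAT code.

```python
def gini(labels):
    """Gini impurity of a set of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def fit_stump(x, y):
    """Fit a one-level decision tree: choose the threshold on x that
    minimizes the weighted Gini impurity of the two resulting leaves."""
    best_impurity, best_stump = float("inf"), None
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue
        impurity = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if impurity < best_impurity:
            best_impurity = impurity
            best_stump = {
                "threshold": t,
                # each leaf predicts its majority class
                "left": round(sum(left) / len(left)),
                "right": round(sum(right) / len(right)),
            }
    return best_stump

def predict(stump, x_new):
    """Score new data by routing each value to a leaf."""
    return [
        stump["left"] if xi <= stump["threshold"] else stump["right"]
        for xi in x_new
    ]
```

A full decision tree simply applies this split search recursively inside each leaf until a stopping rule is met; scoring new data walks each row down to a leaf, exactly as `predict` does for the single split.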

Getting Started with Python Integration to SAS Viya for Predictive Modeling - Fitting a Logistic Regression

Learn how to fit a logistic regression and use your model to score new data. In part 4 of this series, we created and saved our modeling data set with all our updates from imputing missing values and assigning rows to training and validation data. Now we will use this [...]
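In the series the model is fit with CAS actions; to show mechanically what "fitting a logistic regression" means, here is a self-contained batch gradient descent sketch in plain Python. It is illustrative only, not the SWAT workflow from the post.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=1000):
    """Fit logistic regression weights by batch gradient descent on the
    average cross-entropy loss."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            # prediction error for this row drives the gradient
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def score_rows(X, w, b):
    """Predicted event probability for each row of new data."""
    return [
        sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
        for xi in X
    ]
```

Scoring new data is just the second function: apply the fitted weights to each row and read off a probability, which is then thresholded or fed into the comparison charts covered later in the series.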