Here's the least you need to know to make sense of this. The bottom box (Omnibus, Durbin-Watson, and all that) contains tests of the assumptions of regression; we'll talk about them in a later lesson.
The middle box has our coefficients, along with t-tests and p-values for the independent variables, which we can interpret more or less as usual. It also constructs 95% two-tailed confidence intervals, which appear in the last two columns. Here, the p-values for the two independent variables are so low that statsmodels doesn't even bother displaying them. (They're not actually zero---they're never actually zero.) And, consistent with that, the confidence intervals don't include zero. (N.b., the coefficient, p-value, and confidence interval don't mean much for the intercept.)
In the first box, the most important thing is the R-squared, which is a measure of the goodness of fit of our model, that is, the extent to which the model has enough information in it to account for the variation in our dependent variable (as you should have seen in the readings). The catch with R-squared is that it always increases when you add more independent variables, and what counts as a good R-squared varies by problem domain, so it isn't all that useful except to compare two models with the same (or very nearly the same) number of independent variables and large differences in R-squared. The adjusted R-squared is an attempt to account for the number of variables in the model (i.e., to reduce the figure as you add more independent variables). For more, the Minitab blog has a nice explanation of R-squared.
Another goodness-of-fit measure is the F-statistic, which is essentially a test of the null hypothesis that all of the regression coefficients are equal to zero. Again, the Minitab blog has a nice explanation.
We'll be spending the next couple of weeks digging further into regression analysis---this is just the very basics, to give you an idea of the territory.