
AFC Dunstable vs Biggleswade Town

Expert Analysis: AFC Dunstable vs Biggleswade Town

The upcoming match between AFC Dunstable and Biggleswade Town promises to be an intriguing encounter. Both teams have shown competitive spirit in recent fixtures, so the game is expected to be closely contested. The numbers point to a cagey second half, with the "both teams not to score in the 2nd half" market rated at a 97.50% probability. Across the full match, however, goals are still anticipated: over 1.5 goals is rated at 83.70% and over 2.5 goals at 62.10%.

AFC Dunstable

Recent form: LLLLL

Biggleswade Town

Recent form: DWWWW
Date: 2025-12-02
Time: 19:45
Venue: Not Available Yet

Betting Predictions

  • Both Teams Not To Score In 2nd Half: rated at a 97.50% probability, reflecting the expectation of a tightly contested second half with few clear chances.
  • Both Teams Not To Score In 1st Half: at 98.40%, it looks unlikely that both sides will find the net before the break.
  • Over 1.5 Goals: at 83.70%, more than one and a half goals are still expected over the match despite the cautious halves.
  • Both Teams To Score: at 66.10%, both sides are given a good chance of breaching each other's defences.
  • Over 2.5 Goals: at 62.10%, a relatively high-scoring game remains plausible.
  • Over 2.5 & BTTS (Both Teams To Score): at 56.80%, this combined market pairs the higher goal expectancy with both teams scoring (a quick odds-conversion sketch follows below).
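
For context, the percentages above are model probabilities rather than bookmaker prices. A purely illustrative sketch of how such probabilities translate into fair decimal odds (one divided by the probability):

# Convert the quoted probabilities into fair decimal odds (1 / p); values taken from the list above
probabilities = {
    "Both teams not to score in 2nd half": 0.9750,
    "Over 1.5 goals": 0.8370,
    "Both teams to score": 0.6610,
    "Over 2.5 goals": 0.6210,
}
fair_odds = {market: round(1 / p, 2) for market, p in probabilities.items()}
print(fair_odds)   # Over 2.5 goals comes out around 1.61, Over 1.5 goals around 1.19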

Predictions Overview

The expected total goals for this game is approximately 3.53. Even if defences dominate the early stages, the numbers point to bursts of attacking play and several goals across the ninety minutes, from either or both teams.
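
As a rough sanity check (an assumption of ours, not the site's own model), treating total goals as Poisson-distributed with mean 3.53 puts the goal markets in a similar region to the quoted figures:

from math import exp, factorial

mean_goals = 3.53   # expected total goals quoted above

def poisson_pmf(k, lam=mean_goals):
    return exp(-lam) * lam**k / factorial(k)

p_over_1_5 = 1 - sum(poisson_pmf(k) for k in (0, 1))     # about 0.87, versus the quoted 83.70%
p_over_2_5 = 1 - sum(poisson_pmf(k) for k in (0, 1, 2))  # about 0.68, versus the quoted 62.10%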

Average Goals Statistics

  • Avg. Goals Scored: around 2.30 per team over the match.
  • Avg. Goals Conceded: around 2.13 per team.

neilbhatt/Neuroscience/Matlab Code/Neural Coding/Rate Code/Rates/Rates.m
%% Rates

clear all;
close all;

load('Rates.mat');

% Figure 1: rate traces (columns 1-3) and ISI trace (column 4)
subplot(1,4,1:3);
plot(Rates(:,1:3));
xlabel('Time (s)');
ylabel('Firing Rate (Hz)');
title('Firing Rate');
legend({'Noisy','Stimulus','Response'},'Location','Best');
axis([0 length(Rates) -10 max(max(Rates))]);

subplot(1,4,4);
plot(Rates(:,4));
xlabel('Time (s)');
ylabel('Inter Spike Interval (ms)');
title('ISI');
axis([0 length(Rates) -Inf Inf]);

% Figure 2: the same traces with a wider ISI panel
figure;
subplot(1,3,1:2);
plot(Rates(:,1:3));
xlabel('Time (s)');
ylabel('Firing Rate (Hz)');
title('Firing Rate');
legend({'Noisy','Stimulus','Response'},'Location','Best');
axis([0 length(Rates) -10 max(max(Rates))]);

subplot(1,3,3);
plot(Rates(:,4));
xlabel('Time (s)');
ylabel('Inter Spike Interval (ms)');
title('ISI');
axis([0 length(Rates) -Inf Inf]);

## Neuroscience
### Neural Coding
#### Temporal Code

### Introduction
A neural code is a way in which neurons encode information through their electrical activity or firing patterns.
There are two main types:
– **Rate codes** describe how information is encoded based on how often neurons fire within certain time intervals.
– **Temporal codes** describe how information is encoded based on when individual spikes occur.

In temporal coding models it has been proposed that changes in stimulus properties are encoded as changes in spike timing rather than firing rate.

### Methods
The following code simulates two different types of temporal coding model:
– Phase coding: Neurons respond best when stimuli occur at certain phases relative to their ongoing oscillations.
– Spike Timing Dependent Plasticity: Neurons respond best when they receive inputs just before they would have fired anyway.

#### Phase Coding

![Phase Coding](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Phase_Coding.png)

The following figure shows how phase coding works:

![Phase Coding Model](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Phase_Coding_Model.png)

In this model we simulate an ensemble of N=100 neurons whose oscillations are described by sinusoidal functions with random phases:

N = size(x_train',2);                     % number of neurons (columns of x_train)
x = zeros(length(t),N);
for i = 1:N
    x(:,i) = sin(20*t + rand*2*pi);       % fixed-frequency oscillation with a random phase
end

We then apply a Gaussian envelope around each neuron’s peak response time:

x = x.*exp(-((t-t_max).^2)/(sigma^2));

We can then plot our model’s output:

![Model Output](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Phase_Coding_Output.png)
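
For readers who prefer Python, here is a minimal NumPy sketch of the same construction; the time axis, envelope centre and envelope width below are illustrative assumptions rather than values taken from the repository's scripts:

import numpy as np

# Illustrative parameters (assumed, not taken from the original scripts)
N = 100                                   # number of neurons
t = np.arange(0.0, 1.0, 1e-3)             # 1 s of simulated time
t_max, sigma = 0.5, 0.1                   # envelope centre and width

rng = np.random.default_rng(0)
phases = rng.uniform(0.0, 2 * np.pi, N)                  # one random phase per neuron
x = np.sin(20 * t[:, None] + phases)                     # fixed-frequency oscillations
x *= np.exp(-((t - t_max) ** 2) / sigma ** 2)[:, None]   # Gaussian envelope around t_max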

#### Spike Timing Dependent Plasticity

![STDP](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Spike_Timing_Dependent_Plasticity.png)

Spike timing dependent plasticity (STDP) adjusts synaptic strength according to the relative timing of pre- and postsynaptic spikes: an input synapse is strengthened when its spike arrives just before the postsynaptic neuron fires, and weakened when it arrives just after.

The following figure shows how STDP works:

![STDP Model](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Spike_Timing_Dependent_Plasticity_Model.png)
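
As a generic illustration of the idea (this is the standard exponential STDP window, not necessarily the exact rule implemented in these scripts, and the amplitudes and time constant are assumed values):

import numpy as np

def stdp_weight_change(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Standard exponential STDP window.

    delta_t = t_post - t_pre in ms: positive values (input arrives before the
    postsynaptic spike) potentiate the synapse, negative values depress it.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))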

In this model we simulate an ensemble of N=100 neurons whose oscillations are described by sinusoidal functions with random phases:

N = size(x_train',2);                     % number of neurons (columns of x_train)
x = zeros(length(t),N);
for i = 1:N
    x(:,i) = sin(20*t + rand*2*pi);       % fixed-frequency oscillation with a random phase
end

We then apply a Gaussian envelope around each neuron’s peak response time:

x = x.*exp(-((t-t_max).^2)/(sigma^2));

We can then plot our model’s output:

![Model Output](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Spike_Timing_Dependent_Plasticity_Output.png)
## Neuroscience
### Neural Coding
#### Rate Code

### Introduction
A neural code is a way in which neurons encode information through their electrical activity or firing patterns.
There are two main types:
– **Rate codes** describe how information is encoded based on how often neurons fire within certain time intervals.
– **Temporal codes** describe how information is encoded based on when individual spikes occur.

In rate coding models it has been proposed that changes in stimulus properties are encoded as changes in firing rate rather than spike timing.

### Methods
The following code simulates three different types of rate coding model:
– Poisson Firing: Neurons fire randomly according to some average rate.
– Adaptive Firing: Neurons adapt their firing rates based on previous activity.
– Nonlinear Firing: Neurons fire according to some nonlinear function.

#### Poisson Firing

![Poisson Firing](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Poisson_Firing.png)

The following figure shows how Poisson firing works:

![Poisson Model](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Poisson_Model.png)

In this model we simulate an ensemble of N=100 neurons whose firing rates are described by Poisson distributions:

N = size(x_train',2);                          % number of neurons (columns of x_train)
r = zeros(size(x_train'));
for i = 1:N
    r(:,i) = poissrnd(lamda*(x_train(:,i)));   % Poisson counts with mean lamda*x_train(:,i)
end

We can then plot our model’s output:

![Model Output](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Poisson_Output.png)
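
An equivalent NumPy sketch, with x_train and lamda replaced by illustrative stand-ins since their actual values come from the repository's data:

import numpy as np

rng = np.random.default_rng(0)
x_train = rng.random((1000, 100))     # stand-in drive: (time bins x neurons) in [0, 1)
lamda = 5.0                           # rate scaling factor (illustrative)

r = rng.poisson(lamda * x_train)      # one Poisson spike count per bin and neuron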

#### Adaptive Firing

![Adaptive Firing](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Adaptive_Firing.png)

Adaptive firing describes how neurons adapt their firing rates based on previous activity.

The following figure shows how adaptive firing works:

![Adaptive Model](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Adaptive_Model.png)

In this model we simulate an ensemble of N=100 neurons whose firing rates adapt based on previous activity using an exponential decay function:

alpha = 0;
beta = 0;
tau = 15;
f = zeros(size(x_train'));
for i = 1:N
    f(:,i) = alpha + beta*exp(-(t)/tau).*x_train(:,i);   % exponentially decaying gain on the drive
end

We can then plot our model’s output:

![Model Output](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Adaptive_Output.png)
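
A NumPy version of the same exponential-decay gain; alpha and beta are given nonzero illustrative values here (with the listed MATLAB parameters the output would be identically zero), and the time axis is assumed:

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1e-3)              # time axis (assumed)
x_train = rng.random((t.size, 100))        # stand-in drive: (time bins x neurons)

alpha, beta, tau = 0.5, 2.0, 0.15          # illustrative nonzero parameters
f = alpha + beta * np.exp(-t / tau)[:, None] * x_train   # exponentially decaying gain on the drive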

#### Nonlinear Firing

![Nonlinear Fitting](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Nonlinearity_Fitting.png)

Nonlinear firing describes how neurons fire according to some nonlinear function, such as a sigmoid or an exponential.

The following figure shows how nonlinear firing works:

![Nonlinear Model](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Nonlinearity_Model.png)

In this model we simulate an ensemble of N=100 neurons whose firing rates are described by sigmoid functions of time centred on each neuron's peak response time:

alpha = -15;
beta = 10;
tau = 15;
f = zeros(size(x_train'));
for i = 1:N
    f(:,i) = alpha + beta./(1+exp(-(t-t_max(i))/tau));   % sigmoidal rate ramp around t_max(i)
end

We can then plot our model’s output:

![Model Output](https://github.com/neilbhatt/Neuroscience/blob/master/Images/Nonlinearity_Output.png)
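
And a NumPy sketch of the sigmoidal model; the time axis, the per-neuron t_max values and the rescaled tau are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1e-3)                  # time axis (assumed)
N = 100
t_max = rng.uniform(0.2, 0.8, N)               # per-neuron peak response times (assumed)

alpha, beta, tau = -15.0, 10.0, 0.15           # offset, gain and slope (tau rescaled to this t)
f = alpha + beta / (1 + np.exp(-(t[:, None] - t_max) / tau))   # sigmoidal rate ramp per neuron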
## Neuroscience

This repository contains Matlab code for simulating neural encoding models used in neuroscience research.

### Neural Coding

This section contains Matlab code for simulating neural encoding models used in neuroscience research.

#### Temporal Code

This section contains Matlab code for simulating temporal encoding models used in neuroscience research.

##### Phase Coding

This section contains Matlab code for simulating phase encoding models used in neuroscience research.

###### Methods

The following code simulates two different types of temporal coding model:
– Phase coding: Neurons respond best when stimuli occur at certain phases relative to their ongoing oscillations.
– Spike Timing Dependent Plasticity: Neurons respond best when they receive inputs just before they would have fired anyway.

###### Results

Below you can see example outputs from these simulations.

![](ImagesResultsPhase_Coding_Output_01.jpg)

![](ImagesResultsPhase_Coding_Output_02.jpg)

##### Spike Timing Dependent Plasticity

This section contains Matlab code for simulating STDP encoding models used in neuroscience research.

###### Methods

The following code simulates two different types of temporal coding model:
– Phase coding: Neurons respond best when stimuli occur at certain phases relative to their ongoing oscillations.
– Spike Timing Dependent Plasticity: Neurons respond best when they receive inputs just before they would have fired anyway.

###### Results

Below you can see example outputs from these simulations.

![](ImagesResultsSpike_Timing_Dependent_Plasticity_Output_01.jpg)

![](ImagesResultsSpike_Timing_Dependent_Plasticity_Output_02.jpg)

#### Rate Code

This section contains Matlab code for simulating rate encoding models used in neuroscience research.

##### Poisson Firing

This section contains Matlab code for simulating Poisson firing encoding models used in neuroscience research.

###### Methods

The following code simulates three different types of rate coding model:
– Poisson Firing: Neurons fire randomly according to some average rate.
– Adaptive Firing: Neurons adapt their firing rates based on previous activity.
– Nonlinear Fitting: Neurons fire according to some nonlinear function.

###### Results

Below you can see example outputs from these simulations.

![](ImagesResultsPoisson_Output_01.jpg)

![](ImagesResultsPoisson_Output_02.jpg)

##### Adaptive Firing

This section contains Matlab code for simulating adaptive firing encoding models used in neuroscience research.

###### Methods

The following code simulates three different types of rate coding model:
– Poisson Firing: Neurons fire randomly according to some average rate.
– Adaptive Firing: Neurons adapt their firing rates based on previous activity.
– Nonlinear Fitting: Neurons fire according to some nonlinear function.

###### Results

Below you can see example outputs from these simulations.

![](ImagesResultsAdaptive_Output_01.jpg)

![](ImagesResultsAdaptive_Output_02.jpg)

##### Nonlinear Fitting

This section contains Matlab code for simulating non-linear fitting encoding models used in neuroscience research.

###### Methods

The following code simulates three different types of rate coding model:
– Poisson Firing: Neurons fire randomly according to some average rate.
– Adaptive Firing: Neurons adapt their firing rates based on previous activity.
– Nonlinear Fitting: Neurons fire according to some nonlinear function.

###### Results

Below you can see example outputs from these simulations.

![](ImagesResultsnonlinearity_Output_01.jpg)

![](ImagesResultsnonlinearity_Output_02.jpg)

# -*- coding:utf8 -*-

from __future__ import print_function

import warnings

import numpy as np

def poissrnd(lamda):
    """
    Simulate Poisson distributed random numbers.

    Parameters
    ----------
    lamda : float or array_like(float)
        the mean value(s), i.e. the Poisson parameter(s) lambda

    Returns
    -------
    r : float or ndarray(float)
        simulated Poisson distributed random number(s)
    """
    return np.random.poisson(lamda)

def normrnd(mu, sigma):
    """
    Simulate normally distributed random numbers.

    Parameters
    ----------
    mu : float or array_like(float)
        the mean value(s)
    sigma : float or array_like(float)
        the standard deviation value(s)

    Returns
    -------
    r : float or ndarray(float)
        simulated normally distributed random number(s)

    Raises
    ------
    TypeError
        if mu or sigma is not numeric
    ValueError
        if mu and sigma do not have the same shape, if either contains
        NaNs or infinities, or if any standard deviation is not positive
    """
    mu = np.asarray(mu)
    sigma = np.asarray(sigma)

    if not (np.issubdtype(mu.dtype, np.number) and np.issubdtype(sigma.dtype, np.number)):
        raise TypeError("Mean and standard deviation inputs must be numeric")
    if mu.shape != sigma.shape:
        raise ValueError("Mean and standard deviation must have same shape")
    if not (np.all(np.isfinite(mu)) and np.all(np.isfinite(sigma))):
        raise ValueError("Mean and standard deviation must be finite and must not contain NaNs")
    if np.any(sigma <= 0):
        raise ValueError("Standard deviation must be positive")

    return np.random.normal(mu, sigma)


# The original definition line for this function was garbled; the name gamrnd is
# assumed here, following the poissrnd/normrnd naming pattern.
def gamrnd(alpha, beta):
    """
    Simulate gamma distributed random numbers.

    Parameters
    ----------
    alpha : float or array_like(float)
        the shape parameter(s); expected to be positive
    beta : float or array_like(float)
        the scale parameter(s); expected to be positive

    Returns
    -------
    r : float or ndarray(float)
        simulated gamma distributed random number(s)
    """
    alpha = np.asarray(alpha)
    beta = np.asarray(beta)

    if not (np.issubdtype(alpha.dtype, np.number) and np.issubdtype(beta.dtype, np.number)):
        raise TypeError("Shape and scale inputs must be numeric")
    if alpha.shape != beta.shape:
        raise ValueError("Shape and scale parameters must have same shape")
    if np.any(alpha <= 0) or np.any(beta <= 0):
        warnings.warn(
            "Gamma distribution parameter constraints violated: "
            "alpha and beta must be positive (invalid values encountered)",
            RuntimeWarning,
            stacklevel=2,
        )

    return np.random.gamma(alpha, beta)
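
A minimal usage sketch of these wrappers; the shapes and parameter values below are arbitrary illustrations:

import numpy as np

counts = poissrnd(5.0 * np.ones((1000, 100)))          # Poisson spike counts per bin and neuron
noise = normrnd(np.zeros(100), np.full(100, 0.5))      # Gaussian noise, one value per neuron
isis = gamrnd(np.full(100, 2.0), np.full(100, 10.0))   # gamma-distributed inter-spike intervals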